Chapter 8 KRYLOV SUBSPACE METHODS


A more readable reference is the book by Lloyd N. Trefethen and David Bau.

8.1 Krylov Subspaces
8.2 Arnoldi Orthogonalization
8.3 Generalized Minimal Residual Method
8.4 Conjugate Gradient Method
8.5 Biconjugate Gradient Method
8.6 Biconjugate Gradient Stabilized Method

8.1 Krylov Subspaces

Saad, Sections 6.1, 6.2 omitting Proposition 6.3

In a Krylov subspace method

$$x_i - x_0 \in \mathcal{K}_i(A, r_0) = \operatorname{span}\{r_0, Ar_0, \ldots, A^{i-1}r_0\}.$$

We call $\mathcal{K}_i(A, r_0)$ a Krylov subspace. Equivalently,

$$x_i \in x_0 + \operatorname{span}\{r_0, Ar_0, \ldots, A^{i-1}r_0\},$$

which we call a linear manifold.

The exact solution is $A^{-1}b = x_0 + A^{-1}r_0$. The minimal polynomial of $A$ is the polynomial

$$p(x) = x^m + c_{m-1}x^{m-1} + \cdots + c_1 x + c_0$$

of lowest degree $m$ such that $p(A) = 0$. If $A$ is diagonalizable, $m$ is the number of distinct eigenvalues. To see this, let $A$ have distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_m$, and define

$$p(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_m) = \lambda^m + c_{m-1}\lambda^{m-1} + \cdots + c_1\lambda + c_0.$$

Writing $A = X\Lambda X^{-1}$, we have

$$p(A) = X(\Lambda - \lambda_1 I)(\Lambda - \lambda_2 I)\cdots(\Lambda - \lambda_m I)X^{-1} = 0.$$

If $A$ has no eigenvalue equal to zero, $A^{-1}$ is a linear combination of $I, A, \ldots, A^{m-1}$, e.g.,

$$\begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}^{-1} = \frac{3}{2}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \frac{1}{2}\begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}.$$

To see this, write

$$A^{m-1} + c_{m-1}A^{m-2} + \cdots + c_1 I + c_0 A^{-1} = 0.$$

Hence $A^{-1}r_0 \in \operatorname{span}\{r_0, Ar_0, \ldots, A^{m-1}r_0\}$, $A^{-1}b \in x_0 + \operatorname{span}\{r_0, Ar_0, \ldots, A^{m-1}r_0\}$, and $x_m = A^{-1}b$. This is the finite termination property. (In practice we do not go this far.)

Review questions

1. What is a Krylov subspace?

2. Is $x_0 + \operatorname{span}\{r_0, Ar_0, \ldots, A^{i-1}r_0\}$ a Krylov subspace?

3. Which of the following methods choose their search directions from a Krylov subspace: cyclic coordinate descent? steepest descent? For each that does, what is its Krylov subspace?

4. What is the minimal polynomial of a square matrix $A$?

5. What is the finite termination property?

Exercise

1. Use the characteristic polynomial of a nonsingular square matrix $A$ to show that $A^{-1}$ can be expressed as a polynomial of degree at most $n - 1$ in $A$.

8.2 Arnoldi Orthogonalization

Saad, Section 6.3

For numerical stability we incrementally construct orthonormal bases $\{q_1, q_2, \ldots, q_k\}$ for the Krylov subspaces. However, rather than applying the Gram-Schmidt process to the sequence $r_0, Ar_0, \ldots, A^{k-1}r_0$, we use what is known as the Arnoldi process. It is based on the fact that each Krylov subspace can be obtained from the orthonormal basis of the Krylov subspace of one dimension less using the spanning set $q_1, q_2, \ldots, q_k, Aq_k$. In other words, a new direction in the expanded Krylov subspace can be created by multiplying the most recent basis vector $q_k$ by $A$ rather than by multiplying $A^{k-1}r_0$ by $A$. We remove from this new direction $Aq_k$ its orthogonal projection onto $q_1, q_2, \ldots, q_k$, obtaining the direction

$$v_{k+1} = Aq_k - q_1 h_{1k} - q_2 h_{2k} - \cdots - q_k h_{kk},$$

where the coefficients $h_{ik}$ are determined by the orthogonality conditions. This computation should be performed using the modified Gram-Schmidt iteration:

    t = A q_k;
    for j = 1, 2, ..., k do {
        /* t = (I - q_j q_j^T) t */
        h_{jk} = q_j^T t;
        t = t - q_j h_{jk};
    }

Normalization produces $q_{k+1} = v_{k+1}/h_{k+1,k}$, where $h_{k+1,k} = \|v_{k+1}\|_2$. The coefficients $h_{ij}$ have been labeled so that we can write

$$AQ_k = Q_{k+1}\bar{H}_k \qquad (8.1)$$

where $Q_k := [q_1, q_2, \ldots, q_k]$ and $\bar{H}_k$ is a $(k+1) \times k$ upper Hessenberg matrix whose $(i,j)$th element, for $j \geq i - 1$, is $h_{ij}$.

Given the basis $q_1, q_2, \ldots, q_k$, we can express any element of the $k$-dimensional Krylov subspace as $Q_k y$ for some $k$-vector $y$.
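To make the Arnoldi process concrete, the following is a minimal, self-contained Java sketch (an illustration under stated assumptions, not the textbook's code; the class name, the dense matVec helper, and the array layout are all hypothetical choices). It builds the orthonormal basis vectors as the columns of Q and the (k+1)-by-k Hessenberg matrix Hbar with modified Gram-Schmidt, so that A Q_k = Q_{k+1} Hbar_k as in eq. (8.1), assuming no breakdown (no subdiagonal coefficient is zero).

    // Sketch of the Arnoldi process with modified Gram-Schmidt (illustrative code).
    // The caller allocates Q as n x (k+1) and Hbar as (k+1) x k.
    public class ArnoldiSketch {
        static void arnoldi(double[][] A, double[] r0, int k, double[][] Q, double[][] Hbar) {
            int n = r0.length;
            double beta = norm(r0);
            for (int i = 0; i < n; i++) Q[i][0] = r0[i] / beta;      // q_1 = r_0 / ||r_0||
            for (int j = 0; j < k; j++) {
                double[] t = matVec(A, column(Q, j));                 // new direction: A times the latest basis vector
                for (int i = 0; i <= j; i++) {                        // modified Gram-Schmidt sweep
                    Hbar[i][j] = dot(column(Q, i), t);
                    for (int l = 0; l < n; l++) t[l] -= Hbar[i][j] * Q[l][i];
                }
                Hbar[j + 1][j] = norm(t);                             // normalization coefficient (subdiagonal entry of Hbar)
                for (int i = 0; i < n; i++) Q[i][j + 1] = t[i] / Hbar[j + 1][j];
            }
        }

        static double[] matVec(double[][] A, double[] x) {            // dense stand-in for a sparse matrix-vector product
            double[] y = new double[A.length];
            for (int i = 0; i < A.length; i++)
                for (int j = 0; j < x.length; j++) y[i] += A[i][j] * x[j];
            return y;
        }
        static double dot(double[] u, double[] v) {
            double s = 0.0;
            for (int i = 0; i < u.length; i++) s += u[i] * v[i];
            return s;
        }
        static double norm(double[] v) { return Math.sqrt(dot(v, v)); }
        static double[] column(double[][] M, int j) {
            double[] c = new double[M.length];
            for (int i = 0; i < M.length; i++) c[i] = M[i][j];
            return c;
        }
    }

The only access to A is through matVec, which is why Arnoldi (and the methods built on it) suits large sparse matrices: one simply substitutes whatever sparse matrix-vector product is available.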
Review questions

1. In the Arnoldi process for orthogonalizing a Krylov subspace $\mathcal{K}_i(A, r_0)$, how is each new basis vector $q_{k+1}$ produced?

2. The relationship among the first $k + 1$ vectors produced by the Arnoldi process for $\mathcal{K}_i(A, r_0)$ can be summarized as
$$AQ_k = Q_{k+1}\bar{H}_k$$
where $Q_k$ is composed of the first $k$ vectors. What are the dimensions of $\bar{H}_k$ and what other property does it have? What is $Q_k^T A Q_k$ in terms of $\bar{H}_k$?

3. Let $Q_k$ be composed of the first $k$ vectors of the Arnoldi process for $\mathcal{K}_i(A, r_0)$. What is $r_0$ in terms of $Q_k$ and $\|r_0\|$?

8.3 Generalized Minimal Residual Method

Saad, Sections 6.5.1, 6.5.3–6.5.5

GMRES, "generalized minimal residual," is a popular iterative method for nonsymmetric matrices $A$. It is based on the principle of minimizing the norm of the residual $\|b - Ax\|_2$, since the energy norm is available only for an s.p.d. matrix. It is, however, not a gradient method; rather it chooses for the correction $x_k - x_0$ that element of the Krylov subspace $\operatorname{span}\{r_0, Ar_0, \ldots, A^{k-1}r_0\}$ which minimizes the 2-norm of the residual.

For numerical stability we construct orthonormal bases $\{q_1, q_2, \ldots, q_k\}$ for the Krylov subspaces using the Arnoldi process. We can express any element of the $k$-dimensional Krylov subspace as $Q_k y$ for some $k$-vector $y$. Thus, the minimization problem becomes

$$\min_y \|b - A(x_0 + Q_k y)\|_2.$$

This is a linear least squares problem involving $k$ unknowns and $n$ equations. The number of equations can be reduced to $k + 1$ by using eq. (8.1) to get

$$\|b - A(x_0 + Q_k y)\|_2 = \|r_0 - AQ_k y\|_2 = \|\rho q_1 - AQ_k y\|_2, \quad \text{where } \rho = \|r_0\|,$$
$$= \|Q_{k+1}(\rho e_1 - \bar{H}_k y)\|_2 = \|\rho e_1 - \bar{H}_k y\|_2.$$
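Below is a sketch of the reduced least-squares solve, under the same illustrative assumptions as the Arnoldi sketch above (it can sit in the same class so that its helpers are available); it is an illustration of the formulation just derived, not the textbook's or any library's GMRES. Given the (k+1)-by-k matrix Hbar and rho = ||r_0|| from Arnoldi, it minimizes ||rho e_1 - Hbar y||_2 by Givens rotations, which is one standard way to exploit the Hessenberg structure, and then forms x_k = x_0 + Q_k y.

    // Illustrative sketch: solve the reduced GMRES least-squares problem and update x.
    static double[] gmresUpdate(double[][] Hbar, double[][] Q, double[] x0, double rho, int k) {
        // Work on copies so the Arnoldi output is preserved.
        double[][] H = new double[k + 1][];
        for (int i = 0; i <= k; i++) H[i] = java.util.Arrays.copyOf(Hbar[i], k);
        double[] g = new double[k + 1];
        g[0] = rho;                                          // right-hand side rho * e_1

        // Givens rotations: zero the subdiagonal of the Hessenberg matrix.
        for (int j = 0; j < k; j++) {
            double a = H[j][j], b = H[j + 1][j];
            double r = Math.hypot(a, b), c = a / r, s = b / r;
            for (int l = j; l < k; l++) {
                double hj = H[j][l], hj1 = H[j + 1][l];
                H[j][l] = c * hj + s * hj1;
                H[j + 1][l] = -s * hj + c * hj1;
            }
            double gj = g[j], gj1 = g[j + 1];
            g[j] = c * gj + s * gj1;
            g[j + 1] = -s * gj + c * gj1;                    // |g[k]| ends up as the minimum of ||rho e_1 - Hbar y||
        }

        // Back-substitution on the k x k upper triangular system.
        double[] y = new double[k];
        for (int i = k - 1; i >= 0; i--) {
            double sum = g[i];
            for (int l = i + 1; l < k; l++) sum -= H[i][l] * y[l];
            y[i] = sum / H[i][i];
        }

        // x_k = x0 + Q_k y  (only the first k columns of Q are used).
        double[] x = x0.clone();
        for (int i = 0; i < x.length; i++)
            for (int l = 0; l < k; l++) x[i] += Q[i][l] * y[l];
        return x;
    }

In a practical GMRES code the rotations are applied incrementally as each Arnoldi column arrives, so the residual norm is available at every step without forming x_k; the one-shot version here just keeps the sketch short.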
Review questions

1. For GMRES applied to $Ax = b$, where $A$ is $n \times n$, from what subset of $\mathbb{R}^n$ does one choose the approximation $x_k$ given an initial guess $x_0$?

2. How many matrix–vector multiplications are required for each iteration of GMRES?

3. What is the optimality property of GMRES?

4. For the GMRES solution from a $k$-dimensional subspace, one solves a least squares problem of reduced dimension. What are the dimensions of the coefficient matrix in this problem, and what is its special property?

8.4 Conjugate Gradient Method

Saad, Sections 6.7.1, 6.11.3

Considered here is the case where $A$ is symmetric. Let $H_k$ be all but the last row of $\bar{H}_k$. Then $Q_k^T A Q_k = H_k$, which is square and upper Hessenberg. Since $A$ is symmetric, $H_k$ is tridiagonal, and we write $\bar{H}_k = \bar{T}_k$, $H_k = T_k$. Clearly, it is unnecessary to compute elements of $T_k$ known to be zero, thus reducing the cost from $O(k^2)$ to $O(k)$.

Assuming $A$ is also positive definite, we choose $x_k$ to be that element of $x_0 + \mathcal{K}_k(A, r_0)$ which is closest to $A^{-1}b$ in energy norm. Hence, each iterate $x_k$ has the following optimality property:

$$|||x_k - A^{-1}b||| = \min\{|||x - A^{-1}b||| : x \in x_0 + \mathcal{K}_k(A, r_0)\}.$$

In Section 7.3 this is shown to be equivalent to making the residual $b - Ax_k$ orthogonal to $\mathcal{K}_k(A, r_0) = \mathcal{R}(Q_k)$. Writing $x_k = x_0 + Q_k y$, this becomes

$$0 = Q_k^T(b - Ax_k) = Q_k^T(r_0 - AQ_k y) = Q_k^T(\rho q_1 - AQ_k y) = \rho e_1 - T_k y,$$

a tridiagonal system to solve for $y$.

Although the conjugate gradient method can be derived from the Lanczos orthogonalization described above, it is more usual to start from the method of steepest descent. The conjugate gradient method constructs directions $p_i$ from the gradients $r_0, r_1, r_2, \ldots$. These directions are conjugate,

$$p_i^T A p_j = 0 \quad \text{if } i \neq j,$$

or orthogonal in the energy inner product. We skip the details and simply state the result:

    x_0 = initial guess;  r_0 = b - A x_0;  p_0 = r_0;
    for i = 0, 1, 2, ... do {
        alpha_i = (r_i^T r_i) / (p_i^T A p_i);
        x_{i+1} = x_i + alpha_i p_i;
        r_{i+1} = r_i - alpha_i A p_i;
        p_{i+1} = r_{i+1} + ((r_{i+1}^T r_{i+1}) / (r_i^T r_i)) p_i;
    }

The cost per iteration is

1 matrix · vector: $Ap_i$
2 vector · vector: $r_{i+1}^T r_{i+1}$ and $p_i^T(Ap_i)$
3 vector + scalar · vector

for a total cost of 1 matrix–vector product and $5n$ multiplications. For the matrix with the 5-point stencil

$$\begin{bmatrix} & -1 & \\ -1 & 4 & -1 \\ & -1 & \end{bmatrix}$$

the cost of matrix·vector is $2.5n$ "multiplications."

We note that $r_i \in \mathcal{K}_{i+1}(A, r_0)$ and $p_i \in \mathcal{K}_{i+1}(A, r_0)$. Moreover, it can be shown that the gradients $\{r_0, r_1, \ldots, r_{i-1}\}$ constitute an orthogonal basis for the Krylov subspace $\mathcal{K}_i(A, r_0)$.

Convergence rate:

$$|||x_k - A^{-1}b||| \leq \left(\frac{\sqrt{\kappa_2(A)} - 1}{\sqrt{\kappa_2(A)} + 1}\right)^k |||x_0 - A^{-1}b|||.$$

To reduce the energy norm of the error by a factor $\varepsilon$ requires approximately $\frac{\sqrt{\kappa_2(A)}}{2}\log\frac{1}{\varepsilon}$ iterations, e.g., $\frac{n^{1/2}}{\pi}\log\frac{1}{\varepsilon}$.

Review questions

1. What does it mean for the directions of the conjugate gradient method to be conjugate?

2. What is the optimality property of the conjugate gradient method?

3. How many matrix–vector multiplications are required for each iteration of CG?

4. On what property of the matrix does the rate of convergence of the conjugate gradient method depend?

Exercises

1. Given $x_i, r_i, p_i$, one conjugate gradient iteration is given by

$$\alpha_i = \frac{r_i^T r_i}{p_i^T A p_i}, \quad r_{i+1} = r_i - \alpha_i A p_i, \quad x_{i+1} = x_i + \alpha_i p_i, \quad p_{i+1} = r_{i+1} + \frac{r_{i+1}^T r_{i+1}}{r_i^T r_i} p_i,$$

where $A$ is assumed to be symmetric positive definite. Suppose you are given

    static void ax(float[] x, float[] y)
    {
        int n = x.length;
        // Given x this method returns y where
        // y[i] = a[i][0]*x[0] + ... + a[i][n-1]*x[n-1]
    ..
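For reference, here is a minimal, self-contained Java sketch of the conjugate gradient iteration stated above (an illustrative implementation under the assumption that A is symmetric positive definite, not the textbook's code; the dense matVec helper stands in for a routine such as the ax method in the exercise, and the class, method, and test-problem choices are hypothetical).

    // Illustrative sketch of the conjugate gradient iteration for a symmetric positive definite A.
    public class ConjugateGradientSketch {
        static double[] cg(double[][] A, double[] b, double[] x0, int maxIter, double tol) {
            int n = b.length;
            double[] x = x0.clone();
            double[] r = new double[n], p = new double[n];
            double[] Ax = matVec(A, x);
            for (int i = 0; i < n; i++) { r[i] = b[i] - Ax[i]; p[i] = r[i]; }  // r_0 = b - A x_0, p_0 = r_0
            double rsOld = dot(r, r);
            for (int iter = 0; iter < maxIter && Math.sqrt(rsOld) > tol; iter++) {
                double[] Ap = matVec(A, p);                  // the single matrix-vector product per iteration
                double alpha = rsOld / dot(p, Ap);           // alpha_i = r_i^T r_i / (p_i^T A p_i)
                for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
                double rsNew = dot(r, r);
                double beta = rsNew / rsOld;                 // r_{i+1}^T r_{i+1} / (r_i^T r_i)
                for (int i = 0; i < n; i++) p[i] = r[i] + beta * p[i];
                rsOld = rsNew;
            }
            return x;
        }

        static double[] matVec(double[][] A, double[] v) {
            double[] y = new double[A.length];
            for (int i = 0; i < A.length; i++)
                for (int j = 0; j < v.length; j++) y[i] += A[i][j] * v[j];
            return y;
        }
        static double dot(double[] u, double[] v) {
            double s = 0.0;
            for (int i = 0; i < u.length; i++) s += u[i] * v[i];
            return s;
        }

        public static void main(String[] args) {
            // Tiny symmetric positive definite test problem; the exact solution is (1/11, 7/11).
            double[][] A = {{4, 1}, {1, 3}};
            double[] b = {1, 2};
            double[] x = cg(A, b, new double[2], 10, 1e-12);
            System.out.println(x[0] + " " + x[1]);
        }
    }

Note that the iteration stores only a handful of vectors and touches A only through matVec, matching the per-iteration cost tallied above: one matrix-vector product, two inner products, and three vector updates.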