Solving Matrix Eigenvalue Equations
Nuclear Talent course: Computational Many-body Methods for Nuclear Physics
Morten Hjorth-Jensen (1), Calvin Johnson (2), Francesco Pederiva (3), and Kevin Schmidt (4)
(1) Michigan State University and University of Oslo, (2) San Diego State University, (3) University of Trento, (4) Arizona State University
ECT*, June 25-July 13
Lecture slides with exercises for Friday July 6 2012

Topics second week, Friday

Friday: diagonalization algorithms

- Householder's (QR) algorithms for smaller systems, typically with dimensionalities less than $10^5$ basis states. We will also discuss Jacobi's method as a warm-up case.
- Basics of Krylov methods (iterative methods), with the Lanczos algorithm for symmetric matrices.
- Today's aim is to build your own Lanczos algorithm using the single-particle basis and Slater determinant basis algorithms you built yesterday. The example we will use is the simple pairing model discussed the first Monday; a minimal sketch is given after the Krylov overview below.

Eigenvalue Solvers

One speaks normally of two main approaches to solving the eigenvalue problem. The first is the formal method, involving determinants and the characteristic polynomial. This proves how many eigenvalues there are, and is the way most of you learned how to solve the eigenvalue problem, but for matrices of dimension greater than 2 or 3 it is rather impractical.

The other general approach is to use similarity or unitary transformations to reduce a matrix to diagonal form. Almost always this is done in two steps: first reduce to, for example, tridiagonal form, and then to diagonal form. The main algorithms we will discuss in detail, the Householder algorithm (a so-called direct method) and the Lanczos algorithm (an iterative method), follow this methodology.

Diagonalization methods, direct methods

Direct or non-iterative methods require, for matrices of dimensionality $n \times n$, typically $O(n^3)$ operations. These methods are normally called standard methods and are used for dimensionalities $n \sim 10^5$ or smaller. A brief historical overview

Year   n
1950   n = 20       (Wilkinson)
1965   n = 200      (Forsythe et al.)
1980   n = 2000     (LINPACK)
1995   n = 20000    (LAPACK)
2012   n ~ 10^5     (LAPACK)

shows that in the course of 60 years the dimension that direct diagonalization methods can handle has increased by almost a factor of $10^4$. However, this pales beside the progress achieved by computer hardware, from flops to petaflops, a factor of almost $10^{15}$. We see clearly played out in history the $O(n^3)$ bottleneck of direct matrix algorithms: sloppily speaking, when $n \sim 10^4$ is cubed we have $O(10^{12})$ operations, which is smaller than the $10^{15}$ increase in flops.

Diagonalization methods

Why iterative methods? If the matrix to diagonalize is large and sparse, direct methods simply become impractical, also because many of the direct methods tend to destroy sparsity. As a result, large dense matrices may arise during the diagonalization procedure. The idea behind iterative methods is to project the $n$-dimensional problem onto smaller, so-called Krylov subspaces. Given a matrix $\hat{A}$ and a vector $\hat{v}$, the associated Krylov sequence of vectors (and thereby subspaces) $\hat{v}, \hat{A}\hat{v}, \hat{A}^2\hat{v}, \hat{A}^3\hat{v}, \dots$ spans successively larger Krylov subspaces.

Matrix                     $\hat{A}\hat{x} = \hat{b}$    $\hat{A}\hat{x} = \lambda\hat{x}$
$\hat{A} = \hat{A}^*$      Conjugate gradient            Lanczos
$\hat{A} \neq \hat{A}^*$   GMRES etc.                    Arnoldi
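Since today's aim is to build your own Lanczos algorithm, here is a minimal C++ sketch of the basic Lanczos iteration for a real symmetric matrix. The 4x4 test matrix, the iteration count m, and the helper names matvec and dot are illustrative assumptions only, not the course code; in the exercise you would replace the dense matrix-vector product with one acting on your Slater determinant basis for the pairing model.

```cpp
// Minimal Lanczos tridiagonalization sketch for a real symmetric matrix.
// Plain C++ with std::vector; a production code would call BLAS/LAPACK.
#include <cmath>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// y = A x (dense matrix-vector product; for a sparse A only this changes)
Vec matvec(const Mat& A, const Vec& x) {
    const int n = static_cast<int>(A.size());
    Vec y(n, 0.0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

int main() {
    // Stand-in symmetric test matrix (replace with your pairing Hamiltonian).
    Mat A = {{ 2, -1,  0,  0},
             {-1,  2, -1,  0},
             { 0, -1,  2, -1},
             { 0,  0, -1,  2}};
    const int n = static_cast<int>(A.size());
    const int m = 4;                       // number of Lanczos iterations
    std::vector<double> alpha(m), beta(m, 0.0);

    Vec v(n, 0.0), vold(n, 0.0);
    v[0] = 1.0;                            // normalized start vector v_1

    for (int j = 0; j < m; ++j) {
        Vec w = matvec(A, v);              // w = A v_j
        alpha[j] = dot(v, w);              // alpha_j = v_j^T A v_j
        for (int i = 0; i < n; ++i)        // orthogonalize against v_j, v_{j-1}
            w[i] -= alpha[j] * v[i] + (j > 0 ? beta[j - 1] * vold[i] : 0.0);
        if (j + 1 < m) {
            beta[j] = std::sqrt(dot(w, w));
            if (beta[j] < 1e-12) break;    // invariant subspace found
            for (int i = 0; i < n; ++i) {  // v_{j+1} = w / beta_j
                vold[i] = v[i];
                v[i] = w[i] / beta[j];
            }
        }
    }
    // The alphas and betas define the tridiagonal matrix T_m, whose
    // eigenvalues approximate the extreme eigenvalues of A.
    for (int j = 0; j < m; ++j)
        std::printf("alpha[%d] = %8.4f   beta[%d] = %8.4f\n",
                    j, alpha[j], j, beta[j]);
    return 0;
}
```

Each iteration needs only one matrix-vector product, which is why the method copes with large sparse matrices: the matrix is never modified, only applied. The small tridiagonal matrix $T_m$ can then be diagonalized with a standard direct method such as Householder's.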
Important Matrix and vector handling packages

The Numerical Recipes codes have been rewritten in Fortran 90/95 and C/C++ by us. The original source codes are taken from the widely used software package LAPACK, which follows two other popular packages developed in the 1970s, namely EISPACK and LINPACK.

- LINPACK: package for linear equations and least-squares problems.
- LAPACK: package for solving symmetric, unsymmetric and generalized eigenvalue problems. From LAPACK's website http://www.netlib.org it is possible to download for free all source codes from this library. Both C/C++ and Fortran versions are available.
- BLAS (I, II and III): Basic Linear Algebra Subprograms are routines that provide standard building blocks for performing basic vector and matrix operations. BLAS I covers vector operations, BLAS II vector-matrix operations, and BLAS III matrix-matrix operations. Highly parallelized and efficient codes, all available for download from http://www.netlib.org.

Basic Matrix Features

Matrix Properties Reminder

$$
A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{pmatrix}, \qquad
I = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$

The inverse of a matrix is defined by $A^{-1} \cdot A = I$.

Basic Matrix Features

Matrix Properties Reminder

Relation                  Name               Matrix elements
$A = A^T$                 symmetric          $a_{ij} = a_{ji}$
$A = (A^T)^{-1}$          real orthogonal    $\sum_k a_{ik} a_{jk} = \sum_k a_{ki} a_{kj} = \delta_{ij}$
$A = A^*$                 real matrix        $a_{ij} = a_{ij}^*$
$A = A^\dagger$           hermitian          $a_{ij} = a_{ji}^*$
$A = (A^\dagger)^{-1}$    unitary            $\sum_k a_{ik} a_{jk}^* = \sum_k a_{ki}^* a_{kj} = \delta_{ij}$

Some famous Matrices

1. Diagonal if $a_{ij} = 0$ for $i \neq j$
2. Upper triangular if $a_{ij} = 0$ for $i > j$
3. Lower triangular if $a_{ij} = 0$ for $i < j$
4. Upper Hessenberg if $a_{ij} = 0$ for $i > j + 1$
5. Lower Hessenberg if $a_{ij} = 0$ for $i < j - 1$
6. Tridiagonal if $a_{ij} = 0$ for $|i - j| > 1$
7. Lower banded with bandwidth $p$: $a_{ij} = 0$ for $i > j + p$
8. Upper banded with bandwidth $p$: $a_{ij} = 0$ for $i < j - p$
9. Banded, block upper triangular, block lower triangular, ...

Basic Matrix Features

Some Equivalent Statements

For an $N \times N$ matrix A the following properties are all equivalent:

1. If the inverse of A exists, A is nonsingular.
2. The equation Ax = 0 implies x = 0.
3. The rows of A form a basis of $\mathbb{R}^N$.
4. The columns of A form a basis of $\mathbb{R}^N$.
5. A is a product of elementary matrices.
6. 0 is not an eigenvalue of A.

Important Mathematical Operations

The basic matrix operations that we will deal with are addition and subtraction,

$$A = B \pm C \implies a_{ij} = b_{ij} \pm c_{ij}, \qquad (1)$$

scalar-matrix multiplication,

$$A = \gamma B \implies a_{ij} = \gamma b_{ij}, \qquad (2)$$

vector-matrix multiplication,

$$y = Ax \implies y_i = \sum_{j=1}^{n} a_{ij} x_j, \qquad (3)$$

matrix-matrix multiplication,

$$A = BC \implies a_{ij} = \sum_{k=1}^{n} b_{ik} c_{kj}, \qquad (4)$$

and transposition,

$$A = B^T \implies a_{ij} = b_{ji}. \qquad (5)$$

Important Mathematical Operations

Similarly, important vector operations that we will deal with are addition and subtraction,

$$x = y \pm z \implies x_i = y_i \pm z_i, \qquad (6)$$

scalar-vector multiplication,

$$x = \gamma y \implies x_i = \gamma y_i, \qquad (7)$$

vector-vector multiplication (called Hadamard multiplication),

$$x = yz \implies x_i = y_i z_i, \qquad (8)$$

the inner or so-called dot product, resulting in a constant,

$$x = y^T z \implies x = \sum_{j=1}^{n} y_j z_j, \qquad (9)$$

and the outer product, which yields a matrix,

$$A = y z^T \implies a_{ij} = y_i z_j. \qquad (10)$$

Eigenvalue Solvers

Let us consider the matrix A of dimension n. The eigenvalues of A are defined through the matrix equation

$$A x^{(\nu)} = \lambda^{(\nu)} x^{(\nu)},$$

where $\lambda^{(\nu)}$ are the eigenvalues and $x^{(\nu)}$ the corresponding eigenvectors. Unless otherwise stated, when we use the word eigenvector we mean the right eigenvector. The left eigenvector is defined as

$$x_L^{(\nu)} A = \lambda^{(\nu)} x_L^{(\nu)}.$$

The above right-eigenvector problem is equivalent to a set of n equations with n unknowns $x_i$.
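To make the last statement concrete, here is the right-eigenvector problem written out for the (purely illustrative) case n = 2:

$$
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \lambda \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
\quad\Longleftrightarrow\quad
\begin{aligned}
(a_{11} - \lambda)\, x_1 + a_{12}\, x_2 &= 0, \\
a_{21}\, x_1 + (a_{22} - \lambda)\, x_2 &= 0,
\end{aligned}
$$

a homogeneous linear system that has a nontrivial solution only for special values of $\lambda$. This observation is the starting point of the next slide.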
Eigenvalue Solvers

The eigenvalue problem can be rewritten as

$$\left(A - \lambda^{(\nu)} I\right) x^{(\nu)} = 0,$$

with I being the unity matrix. This equation has a nontrivial solution if and only if the determinant vanishes,

$$\left|A - \lambda^{(\nu)} I\right| = 0,$$

which in turn means that the determinant is a polynomial of degree n in $\lambda$, and in general we will have n distinct zeros.

Eigenvalue Solvers

The eigenvalues of a matrix $A \in \mathbb{C}^{n \times n}$ are thus the n roots of its characteristic polynomial,

$$P(\lambda) = \det(\lambda I - A),$$

or

$$P(\lambda) = \prod_{i=1}^{n} (\lambda_i - \lambda).$$

The set of these roots is called the spectrum and is denoted $\lambda(A)$. If $\lambda(A) = \{\lambda_1, \lambda_2, \dots, \lambda_n\}$, then

$$\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n,$$

and if we define the trace of A as

$$\mathrm{Tr}(A) = \sum_{i=1}^{n} a_{ii},$$

then $\mathrm{Tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$.

Abel-Ruffini Impossibility Theorem

The Abel-Ruffini theorem (also known as Abel's impossibility theorem) states that there is no general solution in radicals to polynomial equations of degree five or higher.

The content of this theorem is frequently misunderstood. It does not assert that higher-degree polynomial equations are unsolvable. In fact, if the polynomial has real or complex coefficients and we allow complex solutions, then every polynomial equation has solutions; this is the fundamental theorem of algebra. Although these solutions cannot always be computed exactly with radicals, they can be computed to any desired degree of accuracy using numerical methods such as the Newton-Raphson method or Laguerre's method, and in this way they are no different from solutions to polynomial equations of the second, third, or fourth degree. The theorem only concerns the form that such a solution must take.

The content of the theorem is that the solution of a higher-degree equation cannot in all cases be expressed in terms of the polynomial coefficients with a finite number of operations of addition, subtraction, multiplication, division and root extraction. Some polynomials of arbitrary degree, of which the simplest nontrivial example is the monomial equation $a x^n = b$, are always solvable with a radical.

Abel-Ruffini Impossibility Theorem

The Abel-Ruffini theorem says that there are some fifth-degree equations whose solution cannot be so expressed.
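Since the slides point to the Newton-Raphson method as the practical route to polynomial roots, here is a minimal sketch of that idea. The quintic $p(x) = x^5 - x - 1$ is chosen purely for illustration, as a standard example of an equation with no solution in radicals; its real root is nevertheless obtained to machine precision in a handful of iterations.

```cpp
// Newton-Raphson for one real root of p(x) = x^5 - x - 1, a quintic
// with no solution in radicals that is still easy to solve numerically.
#include <cmath>
#include <cstdio>

double p(double x)  { return std::pow(x, 5) - x - 1.0; }
double dp(double x) { return 5.0 * std::pow(x, 4) - 1.0; }

int main() {
    double x = 1.0;                          // starting guess
    for (int k = 0; k < 50; ++k) {
        double step = p(x) / dp(x);          // Newton step: x <- x - p/p'
        x -= step;
        if (std::fabs(step) < 1e-14) break;  // converged to machine precision
    }
    std::printf("root   x    = %.15f\n", x);
    std::printf("check  p(x) = %.3e\n", p(x));
    return 0;
}
```

The update $x \leftarrow x - p(x)/p'(x)$ is all there is to it; convergence is quadratic near a simple root.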
