Computing Singular Values of Large Matrices with an Inverse-Free Preconditioned Krylov Subspace Method∗
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
-
Singular Value Decomposition (SVD)
San José State University, Math 253: Mathematical Methods for Data Visualization. Lecture 5: Singular Value Decomposition (SVD). Dr. Guangliang Chen.

Outline: • Matrix SVD

Introduction. We have seen that symmetric matrices are always (orthogonally) diagonalizable. That is, for any symmetric matrix A ∈ ℝ^{n×n}, there exist an orthogonal matrix Q = [q1 … qn] and a diagonal matrix Λ = diag(λ1, …, λn), both real and square, such that A = QΛQ^T. We have pointed out that the λi's are the eigenvalues of A and the qi's the corresponding eigenvectors (which are orthogonal to each other and have unit norm). Thus, such a factorization is called the eigendecomposition of A, also called the spectral decomposition of A. What about general rectangular matrices?

Existence of the SVD for general matrices. Theorem: For any matrix X ∈ ℝ^{n×d}, there exist two orthogonal matrices U ∈ ℝ^{n×n}, V ∈ ℝ^{d×d} and a nonnegative, "diagonal" matrix Σ ∈ ℝ^{n×d} (of the same size as X) such that X = UΣV^T.

Remark. This is called the Singular Value Decomposition (SVD) of X:
• The diagonal entries of Σ are called the singular values of X (often sorted in decreasing order).
• The columns of U are called the left singular vectors of X.
• The columns of V are called the right singular vectors of X.

[Figure: block shapes of U, Σ and V^T in the two cases n > d and n < d.]
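Both factorizations quoted above are easy to check numerically. The sketch below is a hypothetical NumPy illustration (not part of the lecture): it builds a random symmetric matrix and a random rectangular matrix, computes the eigendecomposition A = QΛQ^T and the SVD X = UΣV^T, and verifies both reconstructions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eigendecomposition of a symmetric matrix: A = Q diag(lam) Q^T
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                      # symmetrize
lam, Q = np.linalg.eigh(A)             # real eigenvalues, orthonormal eigenvectors
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)

# SVD of a general rectangular matrix: X = U Sigma V^T
X = rng.standard_normal((6, 4))        # n = 6 > d = 4
U, s, Vt = np.linalg.svd(X)            # s holds the singular values, decreasing
Sigma = np.zeros_like(X)
Sigma[:len(s), :len(s)] = np.diag(s)   # embed the d singular values in an n x d "diagonal"
assert np.allclose(U @ Sigma @ Vt, X)
print("singular values:", s)
```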
-
Eigen Values and Vectors: Matrices and Eigen Vectors
EIGEN VALUES AND VECTORS

MATRICES AND EIGEN VECTORS
[2 3; 2 1] × [1; 3] = [11; 5]
[2 3; 2 1] × [3; 2] = [12; 8] = 4 × [3; 2]
• Scale: 2 × [3; 2] = [6; 4], and [2 3; 2 1] × [6; 4] = [24; 16] = 4 × [6; 4]

EIGEN VECTOR - PROPERTIES
• Eigenvectors can only be found for square matrices.
• Not every square matrix has eigenvectors.
• Given an n × n matrix that does have eigenvectors, there are n of them; for example, given a 3 × 3 matrix, there are 3 eigenvectors.
• Even if we scale an eigenvector by some amount, we still get the same multiple, because all you're doing is making it longer, not changing its direction.
• All the eigenvectors of a (symmetric) matrix are perpendicular, i.e. orthogonal; this means you can express the data in terms of these perpendicular eigenvectors.
• Also, when we find eigenvectors we usually normalize them to length one.

EIGEN VALUES - PROPERTIES
• Eigenvalues are closely related to eigenvectors: they are the factors by which the eigenvectors are scaled.
• Eigenvalues and eigenvectors always come in pairs, e.g. [2 3; 2 1] × [6; 4] = [24; 16] = 4 × [6; 4].

SPECTRAL THEOREM
Theorem: If A ∈ ℝ^{n×n} is a symmetric matrix (meaning A^T = A), then there exist real numbers (the eigenvalues) λ1, …, λn and orthogonal, non-zero real vectors ϕ1, ϕ2, …, ϕn (the eigenvectors) such that for each i = 1, 2, …, n: Aϕi = λiϕi.

EXAMPLE
A = [30 28; 28 30]. From the spectral theorem, Aϕ = λϕ ⟹ Aϕ − λIϕ = 0 ⟹ (A − λI)ϕ = 0, so det(A − λI) = (30 − λ)² − 28² = 0 ⟹ λ = 58 and λ = 2.
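A quick numerical check of the example above (an illustrative NumPy sketch, not part of the original slides): compute the eigenpairs of A = [30 28; 28 30] and verify Aϕ = λϕ, the orthogonality of the eigenvectors, and the fact that scaling an eigenvector leaves the eigenvalue unchanged.

```python
import numpy as np

A = np.array([[30.0, 28.0],
              [28.0, 30.0]])

# Symmetric matrix: eigh returns real eigenvalues and orthonormal eigenvectors
lam, Phi = np.linalg.eigh(A)
print("eigenvalues:", lam)             # expected: [ 2., 58.]

for i in range(2):
    # Each column of Phi is an eigenvector: A @ phi_i == lam_i * phi_i
    assert np.allclose(A @ Phi[:, i], lam[i] * Phi[:, i])

# Eigenvectors of a symmetric matrix are orthogonal (here orthonormal)
assert np.allclose(Phi.T @ Phi, np.eye(2))

# Scaling an eigenvector does not change its eigenvalue
v = 2 * Phi[:, 1]
assert np.allclose(A @ v, lam[1] * v)
```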
-
Accelerating the LOBPCG Method on GPUs Using a Blocked Sparse Matrix Vector Product
Accelerating the LOBPCG method on GPUs using a blocked Sparse Matrix Vector Product. Hartwig Anzt, Stanimire Tomov and Jack Dongarra, Innovative Computing Lab, University of Tennessee, Knoxville, USA. Email: [email protected], [email protected], [email protected]

Abstract: This paper presents a heterogeneous CPU-GPU algorithm design and optimized implementation for an entire sparse iterative eigensolver – the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) – starting from low-level GPU data structures and kernels to the higher-level algorithmic choices and overall heterogeneous design. Most notably, the eigensolver leverages the high performance of a new GPU kernel developed for the simultaneous multiplication of a sparse matrix and a set of vectors (SpMM). This is a building block that serves as a backbone not only for block-Krylov methods, but also for other methods relying on blocking for acceleration in general. The heterogeneous LOBPCG developed here reveals the potential of this type of eigensolver by highly optimizing all of its components, and can be viewed as a benchmark for other SpMM-dependent applications.

Exploiting the computing power of today's supercomputers, often accelerated by coprocessors like graphics processing units (GPUs), becomes challenging. While there exist numerous efforts to adapt iterative linear solvers like Krylov subspace methods to coprocessor technology, sparse eigensolvers have so far remained outside the main focus. A possible explanation is that many of those combine sparse and dense linear algebra routines, which makes porting them to accelerators more difficult. Aside from the power method, algorithms based on the Krylov subspace idea are among the most commonly used general eigensolvers [1]. When targeting symmetric positive definite eigenvalue problems, the recently developed Locally Optimal …
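The SpMM building block described above is easy to emulate at a high level: multiply the sparse matrix by a block of vectors at once instead of by one vector at a time. The sketch below is a hypothetical SciPy illustration of that idea (it is not the paper's GPU kernel, and the matrix size and density are made up), comparing a loop of SpMV calls with a single blocked product.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

n, k = 10_000, 8                       # matrix size and number of vectors in the block
A = sp.random(n, n, density=1e-3, format="csr", random_state=0)
X = rng.standard_normal((n, k))        # block of k dense vectors

# One SpMV per vector: k separate passes over the sparse matrix
Y_loop = np.column_stack([A @ X[:, j] for j in range(k)])

# One blocked SpMM: a single pass over the sparse matrix, better data reuse
Y_block = A @ X

assert np.allclose(Y_loop, Y_block)
```

On a GPU, the blocked form is where the performance gain comes from, since the sparse matrix is read once for all k vectors instead of k times.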
-
Singular Value Decomposition (SVD)
Contents
1 Singular Value Decomposition (SVD) . . . 2
  1.1 Singular Vectors . . . 3
  1.2 Singular Value Decomposition (SVD) . . . 7
  1.3 Best Rank k Approximations . . . 8
  1.4 Power Method for Computing the Singular Value Decomposition . . . 11
  1.5 Applications of Singular Value Decomposition . . . 16
    1.5.1 Principal Component Analysis . . . 16
    1.5.2 Clustering a Mixture of Spherical Gaussians . . . 16
    1.5.3 An Application of SVD to a Discrete Optimization Problem . . . 22
    1.5.4 SVD as a Compression Algorithm . . . 24
    1.5.5 Spectral Decomposition . . . 24
    1.5.6 Singular Vectors and Ranking Documents . . . 25
  1.6 Bibliographic Notes . . . 27
  1.7 Exercises . . . 28

1 Singular Value Decomposition (SVD)

The singular value decomposition of a matrix A is the factorization of A into the product of three matrices, A = UDV^T, where the columns of U and V are orthonormal and the matrix D is diagonal with positive real entries. The SVD is useful in many tasks. Here we mention some examples. First, in many applications the data matrix A is close to a matrix of low rank, and it is useful to find a low-rank matrix which is a good approximation to the data matrix. We will show that from the singular value decomposition of A we can get the matrix B of rank k which best approximates A; in fact we can do this for every k. Also, the singular value decomposition is defined for all matrices (rectangular or square), unlike the more commonly used spectral decomposition in linear algebra. The reader familiar with eigenvectors and eigenvalues (we do not assume familiarity here) will also realize that we need conditions on the matrix to ensure orthogonality of eigenvectors.
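The best rank-k approximation mentioned above (Section 1.3 of the chapter) is easy to see in action. Below is an illustrative NumPy sketch (my own example, not from the chapter): truncate the SVD of a noisy low-rank matrix and compare the approximation error with the first discarded singular value, which by the Eckart-Young theorem is the optimum in the spectral norm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 50, 30, 5

# A rank-k matrix plus a little noise
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d)) \
    + 0.01 * rng.standard_normal((n, d))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Best rank-k approximation: keep the k largest singular triplets
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: the spectral-norm error equals the (k+1)-st singular value
err = np.linalg.norm(A - A_k, ord=2)
print(err, s[k])                       # the two numbers agree
assert np.isclose(err, s[k])
```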
-
A Singularly Valuable Decomposition: The SVD of a Matrix (Dan Kalman)
A Singularly Valuable Decomposition: The SVD of a Matrix. Dan Kalman.

Dan Kalman is an assistant professor at American University in Washington, DC. From 1985 to 1993 he worked as an applied mathematician in the aerospace industry. It was during that period that he first learned about the SVD and its applications. He is very happy to be teaching again and is avidly following all the discussions and presentations about the teaching of linear algebra.

Every teacher of linear algebra should be familiar with the matrix singular value decomposition (or SVD). It has interesting and attractive algebraic properties, and conveys important geometrical and theoretical insights about linear transformations. The close connection between the SVD and the well-known theory of diagonalization for symmetric matrices makes the topic immediately accessible to linear algebra teachers and, indeed, a natural extension of what these teachers already know. At the same time, the SVD has fundamental importance in several different applications of linear algebra. Gilbert Strang was aware of these facts when he introduced the SVD in his now classical text [22, p. 142], observing that "it is not nearly as famous as it should be." Golub and Van Loan ascribe a central significance to the SVD in their definitive explication of numerical matrix methods [8, p. xiv], stating that "perhaps the most recurring theme in the book is the practical and theoretical value" of the SVD. Additional evidence of the SVD's significance is its central role in a number of recent papers in Mathematics Magazine and the American Mathematical Monthly; for example, [2, 3, 17, 23].
-
Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions
Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions. Xin Liu, Zaiwen Wen, Yin Zhang. March 22, 2012.

Abstract: In many data-intensive applications, the use of principal component analysis (PCA) and other related techniques is ubiquitous for dimension reduction, data mining or other transformational purposes. Such transformations often require efficiently, reliably and accurately computing dominant singular value decompositions (SVDs) of large unstructured matrices. In this paper, we propose and study a subspace optimization technique to significantly accelerate the classic simultaneous iteration method. We analyze the convergence of the proposed algorithm, and numerically compare it with several state-of-the-art SVD solvers under the MATLAB environment. Extensive computational results show that on a wide range of large unstructured matrices, the proposed algorithm can often provide improved efficiency or robustness over existing algorithms.

Keywords: subspace optimization, dominant singular value decomposition, Krylov subspace, eigenvalue decomposition

1 Introduction

Singular value decomposition (SVD) is a fundamental and enormously useful tool in matrix computations, such as determining the pseudo-inverse, the range or null space, or the rank of a matrix, solving regular or total least squares data fitting problems, or computing low-rank approximations to a matrix, just to mention a few. The need for computing SVDs also frequently arises from diverse fields in statistics, signal processing, data mining or compression, and from various dimension-reduction models of large-scale dynamic systems. Usually, instead of acquiring all the singular values and vectors of a matrix, it suffices to compute a set of dominant (i.e., the largest) singular values and their corresponding singular vectors in order to obtain the most valuable and relevant information about the underlying dataset or system.
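The "classic simultaneous iteration method" that the paper sets out to accelerate is essentially a block power method with an orthonormalization step. The sketch below is a minimal NumPy illustration of that baseline (not the authors' accelerated algorithm; function name and test matrix are my own), estimating the k dominant singular values of A.

```python
import numpy as np

def simultaneous_iteration(A, k, iters=200, seed=0):
    """Baseline block power method for the k dominant singular values of A."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    V = np.linalg.qr(rng.standard_normal((d, k)))[0]   # random orthonormal start
    for _ in range(iters):
        U = np.linalg.qr(A @ V)[0]                     # update left block
        V = np.linalg.qr(A.T @ U)[0]                   # update right block
    # Extract singular value estimates from the small projected matrix
    return np.linalg.svd(U.T @ A @ V, compute_uv=False)

A = np.random.default_rng(3).standard_normal((300, 200))
print(simultaneous_iteration(A, 5))                    # estimates (improve with iters)
print(np.linalg.svd(A, compute_uv=False)[:5])          # reference values
```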
-
Solving Symmetric Semi-Definite (Ill-Conditioned) Generalized Eigenvalue Problems
Solving Symmetric Semi-definite (ill-conditioned) Generalized Eigenvalue Problems. Zhaojun Bai, University of California, Davis. Berkeley, August 19, 2016.

Symmetric definite generalized eigenvalue problem:
• Ax = λBx, where A^T = A and B^T = B > 0.
• Eigen-decomposition: AX = BXΛ, where Λ = diag(λ1, λ2, …, λn), X = (x1, x2, …, xn) and X^T B X = I.
• Assume λ1 ≤ λ2 ≤ ··· ≤ λn.

LAPACK solvers:
• LAPACK routines xSYGV, xSYGVD, xSYGVX are based on the following algorithm (Wilkinson '65):
1. compute the Cholesky factorization B = GG^T
2. compute C = G^{-1} A G^{-T}
3. compute the symmetric eigen-decomposition Q^T C Q = Λ
4. set X = G^{-T} Q
• xSYGV[D,X] could be numerically unstable if B is ill-conditioned:
|λ̂_i − λ_i| ≲ p(n) (‖B^{-1}‖_2 ‖A‖_2 + cond(B) |λ̂_i|) · ε
and
θ(x̂_i, x_i) ≲ p(n) · (‖B^{-1}‖_2 ‖A‖_2 cond(B)^{1/2} + cond(B) |λ̂_i|) / specgap_i · ε,
where ε denotes the machine precision and (λ̂_i, x̂_i) the computed eigenpairs.
• User's choice between the inversion of an ill-conditioned Cholesky decomposition and the QZ algorithm that destroys symmetry.

Algorithms to address the ill-conditioning:
1. Fix-Heiberger '72 (Parlett '80): explicit reduction
2. Chang-Chung Chang '74: SQZ method (QZ by Moler and Stewart '73)
3. Bunse-Gerstner '84: MDR method
4. Chandrasekaran '00: "proper pivoting scheme"
5. Davies-Higham-Tisseur '01: Cholesky + Jacobi
6. Working notes by Kahan '11 and Moler '14

This talk: three approaches.
1. A LAPACK-style implementation of the Fix-Heiberger algorithm (status: beta-release)
2. An algebraic reformulation
3. Locally accelerated block preconditioned steepest descent (LABPSD)
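The Wilkinson algorithm behind xSYGV is short enough to prototype directly. Below is a hedged NumPy/SciPy sketch of those four steps (an illustration of the textbook reduction, not LAPACK's implementation; the function name and test matrices are my own). It works well when B is well-conditioned, which is exactly the assumption the slides go on to question.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def sygv_wilkinson(A, B):
    """Solve A x = lambda B x for symmetric A and SPD B via the Cholesky reduction."""
    G = cholesky(B, lower=True)                      # 1. B = G G^T
    Y = solve_triangular(G, A, lower=True)           #    Y = G^{-1} A
    C = solve_triangular(G, Y.T, lower=True)         # 2. C = G^{-1} A G^{-T} (symmetric)
    lam, Q = np.linalg.eigh(C)                       # 3. Q^T C Q = Lambda
    X = solve_triangular(G.T, Q, lower=False)        # 4. X = G^{-T} Q
    return lam, X

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)); A = (M + M.T) / 2
N = rng.standard_normal((6, 6)); B = N @ N.T + 6 * np.eye(6)   # well-conditioned SPD

lam, X = sygv_wilkinson(A, B)
assert np.allclose(A @ X, B @ X @ np.diag(lam))      # A X = B X Lambda
assert np.allclose(X.T @ B @ X, np.eye(6))           # eigenvectors are B-orthonormal
print(lam)
```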
-
On the Performance and Energy Efficiency of Sparse Linear Algebra on GPUs
Original Article, The International Journal of High Performance Computing Applications, 1–16. © The Author(s) 2016. DOI: 10.1177/1094342016672081

On the performance and energy efficiency of sparse linear algebra on GPUs. Hartwig Anzt, Stanimire Tomov and Jack Dongarra.

Abstract: In this paper we unveil some performance and energy efficiency frontiers for sparse computations on GPU-based supercomputers. We compare the resource efficiency of different sparse matrix–vector products (SpMV) taken from libraries such as cuSPARSE and MAGMA for GPU and Intel's MKL for multicore CPUs, and develop a GPU sparse matrix–matrix product (SpMM) implementation that handles the simultaneous multiplication of a sparse matrix with a set of vectors in block-wise fashion. While a typical sparse computation such as the SpMV reaches only a fraction of the peak of current GPUs, we show that the SpMM succeeds in exceeding the memory-bound limitations of the SpMV. We integrate this kernel into a GPU-accelerated Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) eigensolver. LOBPCG is chosen as a benchmark algorithm for this study as it combines an interesting mix of sparse and dense linear algebra operations that is typical for complex simulation applications, and allows for hardware-aware optimizations. In a detailed analysis we compare the performance and energy efficiency against a multi-threaded CPU counterpart. The reported performance and energy efficiency results are indicative of sparse computations on supercomputers.

Keywords: sparse eigensolver, LOBPCG, GPU supercomputer, energy efficiency, blocked sparse matrix–vector product

1 Introduction. GPU-based supercomputers …
-
Chapter 6: The Singular Value Decomposition (version of 11 April 2019)
Matrix Methods for Computational Modeling and Data Analytics, Virginia Tech, Spring 2019 · Mark Embree · [email protected]. Chapter 6: The Singular Value Decomposition. Version of 11 April 2019.

The singular value decomposition (SVD) is among the most important and widely applicable matrix factorizations. It provides a natural way to untangle a matrix into its four fundamental subspaces, and reveals the relative importance of each direction within those subspaces. Thus the singular value decomposition is a vital tool for analyzing data, and it provides a slick way to understand (and prove) many fundamental results in matrix theory. It is the perfect tool for solving least squares problems, and provides the best way to approximate a matrix with one of lower rank. These notes construct the SVD in various forms, then describe a few of its most compelling applications.

6.1 Eigenvalues and eigenvectors of symmetric matrices

To derive the singular value decomposition of a general (rectangular) matrix A ∈ ℝ^{m×n}, we shall rely on several special properties of the square, symmetric matrix A^T A. While this course assumes you are well acquainted with eigenvalues and eigenvectors, we will recall some fundamental concepts, especially pertaining to symmetric matrices.

6.1.1 A passing nod to complex numbers

Recall that even if a matrix has real number entries, it could have eigenvalues that are complex numbers; the corresponding eigenvectors will also have complex entries. Consider, for example, the matrix S = [0 −1; 1 0]. To find the eigenvalues of S, form the characteristic polynomial det(λI − S) = det([λ 1; −1 λ]) = λ² + 1.
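One of the claims above, that the SVD is "the perfect tool for solving least squares problems", can be made concrete through the pseudoinverse. The following NumPy sketch (an illustration of the standard construction, not taken from the notes; the matrix and right-hand side are made up) solves min_x ‖Ax − b‖₂ by inverting only the nonzero singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))        # tall matrix: overdetermined system
b = rng.standard_normal(8)

# Reduced SVD: A = U diag(s) V^T with U (8x3), s (3,), Vt (3x3)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Least squares solution x = V diag(1/s) U^T b (pseudoinverse applied to b)
x_svd = Vt.T @ ((U.T @ b) / s)

# Agrees with the library least squares solver
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(x_svd, x_ref)

# The residual is orthogonal to the range of A (the normal equations hold)
assert np.allclose(A.T @ (A @ x_svd - b), 0)
```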
-
Lecture 15 (July 22, 2013): Expander Mixing Lemma and Singular Value Decomposition
REU 2013: Apprentice Program, Summer 2013. Lecture 15: July 22, 2013. Madhur Tulsiani; scribe: David Kim.

1 Expander Mixing Lemma

Let G = (V, E) be d-regular, such that its adjacency matrix A has eigenvalues µ1 = d ≥ µ2 ≥ ··· ≥ µn. Let S, T ⊆ V, not necessarily disjoint, and E(S, T) = {s ∈ S, t ∈ T : {s, t} ∈ E}.

Question: How close is |E(S, T)| to what is "expected"?

Exercise 1.1: Given G as above, if one picks a random pair i, j ∈ V(G), what is P[i, j are adjacent]?

Answer: P[{i, j} ∈ E] = #good cases / #all cases = #edges / (n choose 2) = (dn/2) / (n(n−1)/2) = d/(n−1).

Since for S, T ⊆ V the number of possible pairs of vertices from S and T is |S|·|T|, we "expect" |S|·|T|·d/(n−1) of those pairs to have edges.

Exercise 1.2*: | |E(S, T)| − |S|·|T|·d/n | ≤ µ √(|S|·|T|).

Note that the smaller the value of µ, the closer |E(S, T)| is to what is "expected"; hence expander graphs are sometimes called pseudorandom graphs for µ small.

We will now show a generalization of the Spectral Theorem, with a generalized notion of eigenvalues, to matrices that are not necessarily square.

2 Singular Value Decomposition

Recall that by the Spectral Theorem, given a normal matrix A ∈ ℂ^{n×n} (A*A = AA*), we have A = UDU* = ∑_{i=1}^{n} λi ui ui*, where U is unitary, D is diagonal, and we have a decomposition of A into a nice orthonormal basis, consisting of the columns of U.
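The mixing lemma is easy to sanity-check numerically on a concrete d-regular graph. The sketch below is my own illustration (not from the lecture notes): it uses a circulant graph, takes µ to be the largest nontrivial eigenvalue in absolute value, and chooses S and T disjoint so that |E(S, T)| is simply the number of edges between them. A ring-like circulant is a weak expander (µ is close to d), so the bound is loose here; for a good expander µ is much smaller and the bound becomes informative.

```python
import numpy as np

# Circulant graph on n vertices: each vertex adjacent to its k nearest
# neighbours on each side, so the graph is d-regular with d = 2k.
n, k = 60, 4
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(1, k + 1):
        A[i, (i + j) % n] = A[i, (i - j) % n] = 1
d = 2 * k

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
assert np.isclose(eigs[0], d)                      # top eigenvalue equals the degree
mu = max(abs(eigs[1]), abs(eigs[-1]))              # largest nontrivial eigenvalue

rng = np.random.default_rng(0)
perm = rng.permutation(n)
S, T = perm[:20], perm[20:40]                      # disjoint vertex sets

e_ST = A[np.ix_(S, T)].sum()                       # edges between S and T
expected = d * len(S) * len(T) / n
bound = mu * np.sqrt(len(S) * len(T))

print(abs(e_ST - expected), "<=", bound)
assert abs(e_ST - expected) <= bound
```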
-
High Efficiency Spectral Analysis and BLAS-3 Randomized QRCP with Low-Rank Approximations
High Efficiency Spectral Analysis and BLAS-3 Randomized QRCP with Low-Rank Approximations, by Jed Alma Duersch. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Applied Mathematics in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Ming Gu (Chair), Assistant Professor Lin Lin, Professor Kameshwar Poolla. Fall 2015. Copyright 2015 by Jed Alma Duersch.

Contents

1 A Robust and Efficient Implementation of LOBPCG . . . 2
  1.1 Contributions and results . . . 2
  1.2 Introduction to LOBPCG . . . 3
  1.3 The basic LOBPCG algorithm . . . 4
  1.4 Numerical stability and basis selection . . . 6
  1.5 Stability improvements . . . 10
  1.6 Efficiency improvements . . . 13
  1.7 Analysis . . . 22
  1.8 Numerical examples . . . 25

2 Spectral Target Residual Descent . . . 32
  2.1 Contributions and results . . . 32
  2.2 Introduction to interior eigenvalue targeting . . . 33
  2.3 Spectral targeting analysis . . . 34
  2.4 Notes on generalized spectral targeting . . . 42
  2.5 Numerical experiments . . . 48

3 True BLAS-3 Performance QRCP using Random Sampling . . . 53
  3.1 Contributions and results . . . 53
  3.2 Introduction to QRCP . . . 54
  3.3 Sample QRCP . . . 58
  3.4 Sample updates . . . 64
  3.5 Full randomized QRCP . . . 66
  3.6 Parallel implementation notes . . . 71
  3.7 Truncated approximate Singular Value Decomposition . . . 73
  3.8 Experimental performance . . . 77

Bibliography . . . 93

Acknowledgments: I gratefully thank my coauthors, Dr. Meiyue Shao and Dr. Chao Yang, for their time and expertise. I would also like to thank Assistant Professor Lin Lin, Dr. …
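Chapter 1 of the dissertation concerns a robust LOBPCG implementation. For readers who simply want to experiment with the method, SciPy ships an LOBPCG solver; the sketch below is a usage illustration unrelated to the dissertation's own code, computing a few of the smallest eigenpairs of a sparse 1-D Laplacian with a (deliberately strong) preconditioner built from a sparse LU factorization.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, splu, LinearOperator

n, k = 1000, 4

# 1-D discrete Laplacian (tridiagonal, symmetric positive definite)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Preconditioner: apply A^{-1} via a sparse LU factorization (cheap for a
# tridiagonal matrix; in practice an approximate inverse would be used).
lu = splu(A)
M = LinearOperator((n, n), matvec=lu.solve)

rng = np.random.default_rng(0)
X = rng.standard_normal((n, k))            # random initial block of k vectors

vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)

# Known eigenvalues of the 1-D Laplacian: 2 - 2*cos(pi*j/(n+1)), j = 1..k
exact = 2 - 2 * np.cos(np.pi * np.arange(1, k + 1) / (n + 1))
print(np.sort(vals))
print(exact)
```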
-
Singular Value Decomposition
Math 312, Spring 2014. Jerry L. Kazdan. Singular Value Decomposition.

Positive Definite Matrices. Let C be an n × n positive definite (symmetric) matrix and consider the quadratic polynomial ⟨x, Cx⟩. How can we understand what this "looks like"? One useful approach is to view the image of the unit sphere, that is, the points that satisfy ‖x‖ = 1. For instance, if ⟨x, Cx⟩ = 4x1² + 9x2², then the image of the unit disk, x1² + x2² = 1, is an ellipse. For the more general case of ⟨x, Cx⟩ it is fundamental to use coordinates that are adapted to the matrix C. Since C is a symmetric matrix, there are orthonormal vectors v1, v2, …, vn in which C is a diagonal matrix. These vectors vj are eigenvectors of C with corresponding eigenvalues λj, so Cvj = λjvj. In these coordinates, say

x = y1v1 + y2v2 + ··· + ynvn   (1)

and

Cx = λ1y1v1 + λ2y2v2 + ··· + λnynvn.   (2)

Then, using the orthonormality of the vj,

‖x‖² = ⟨x, x⟩ = y1² + y2² + ··· + yn²   and   ⟨x, Cx⟩ = λ1y1² + λ2y2² + ··· + λnyn².   (3)

For use in the singular value decomposition below, it is conventional to number the eigenvalues in decreasing order: λmax = λ1 ≥ λ2 ≥ ··· ≥ λn = λmin, so on the unit sphere ‖x‖ = 1,

λmin ≤ ⟨x, Cx⟩ ≤ λmax.

As in the figure, the image of the unit sphere is an "ellipsoid" whose axes are in the direction of the eigenvectors, and the corresponding eigenvalues give half the length of the axes. There are two closely related approaches to using the fact that a self-adjoint matrix C can be orthogonally diagonalized.
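A short numerical illustration of the inequality above (my own NumPy sketch, not part of Kazdan's notes): sample points on the unit sphere and check that ⟨x, Cx⟩ stays between the smallest and largest eigenvalues, with the extremes attained at the corresponding eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric positive definite matrix C
B = rng.standard_normal((4, 4))
C = B @ B.T + 4 * np.eye(4)

lam, V = np.linalg.eigh(C)             # eigenvalues in increasing order
lam_min, lam_max = lam[0], lam[-1]

# Quadratic form <x, Cx> on random unit vectors
X = rng.standard_normal((4, 10_000))
X /= np.linalg.norm(X, axis=0)         # normalize columns to the unit sphere
q = np.einsum("ij,ik,kj->j", X, C, X)  # q[j] = x_j^T C x_j

assert np.all(q >= lam_min - 1e-12) and np.all(q <= lam_max + 1e-12)

# The bounds are attained at the eigenvectors
assert np.isclose(V[:, 0] @ C @ V[:, 0], lam_min)
assert np.isclose(V[:, -1] @ C @ V[:, -1], lam_max)
```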