AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems
Xiangmin Jiao, Stony Brook University

Outline
1. Computing the SVD (NLA §31)
2. Sparse Storage Format
3. Direct Methods for Sparse Linear Systems (MC §11.1-11.2)
4. Overview of Iterative Methods for Sparse Linear Systems

SVD of A and Eigenvalues of A∗A
- Intuitive idea for computing the SVD of A ∈ R^(m×n):
  - Form A∗A and compute its eigenvalue decomposition A∗A = VΛV∗
  - Let Σ = √Λ, i.e., diag(√λ_1, √λ_2, ..., √λ_n)
  - Solve the system UΣ = AV to obtain U
- This method is efficient if m ≫ n. However, it may not be stable, especially for smaller singular values, because of the squaring of the condition number
  - For the SVD of A, |σ̃_k − σ_k| = O(ε_machine ‖A‖), where σ̃_k and σ_k denote the computed and exact kth singular values
  - If computed from the eigenvalue decomposition of A∗A, |σ̃_k − σ_k| = O(ε_machine ‖A‖² / σ_k), which is problematic if σ_k ≪ ‖A‖
- If one is interested only in relatively large singular values, then computing eigenvalues of A∗A is not a problem. For general situations, a more stable algorithm is desired.

A Different Reduction to Eigenvalue Problem
- Typical algorithms for computing the SVD are similar to those for computing eigenvalues
- Consider A ∈ C^(m×n). The Hermitian matrix H = [0, A∗; A, 0] has the eigenvalue decomposition
  H [V, V; U, −U] = [V, V; U, −U] [Σ, 0; 0, −Σ],
  where A = UΣV∗ gives the SVD. This approach is stable.
- In practice, such a reduction is done implicitly, without forming the large matrix
- Typically done in two phases

Two-Phase Method
- In the first phase, reduce to bidiagonal form by applying different orthogonal transformations on the left and right, which involves O(mn²) operations
- In the second phase, reduce to diagonal form using a variant of the QR algorithm or a divide-and-conquer algorithm, which involves O(n²) operations for a fixed precision
- We hereafter focus on the first phase

Golub-Kahan Bidiagonalization
- Apply Householder reflectors on both the left and right sides (a code sketch follows the performance comparison below)
- Work for Golub-Kahan bidiagonalization: ~ 4mn² − (4/3)n³ flops

Lawson-Hanson-Chan Bidiagonalization
- Speed up by first performing a QR factorization of A
- Work for LHC bidiagonalization: ~ 2mn² + 2n³ flops, which is advantageous if m ≥ (5/3)n

Three-Step Bidiagonalization
- Hybrid approach: apply QR at a suitable time, on the submatrix with 5/3 aspect ratio
- Work for three-step bidiagonalization: ~ 4mn² − (4/3)n³ − (2/3)(m − n)³ flops

Comparison of Performance
[Figure: operation counts of one-step (Golub-Kahan), two-step (LHC), and three-step bidiagonalization]
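The first phase can be illustrated with a minimal sketch of Golub-Kahan bidiagonalization, assuming a real matrix with m ≥ n. Householder reflectors are applied alternately from the left (to zero a column below the diagonal) and from the right (to zero a row beyond the superdiagonal); the helper name `householder` is illustrative, not a library routine.

```python
import numpy as np

def householder(x):
    """Return unit v such that (I - 2 v v^T) x = -sign(x[0]) ||x|| e_1."""
    v = x.astype(float).copy()
    v[0] += np.copysign(np.linalg.norm(x), x[0])
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def golub_kahan_bidiag(A):
    """Reduce A (m x n, m >= n) to upper bidiagonal form B = U^T A V."""
    B = A.astype(float).copy()
    m, n = B.shape
    for k in range(n):
        # Left reflector: zero out B[k+1:, k]
        v = householder(B[k:, k])
        B[k:, k:] -= 2.0 * np.outer(v, v @ B[k:, k:])
        if k < n - 2:
            # Right reflector: zero out B[k, k+2:]
            w = householder(B[k, k+1:])
            B[k:, k+1:] -= 2.0 * np.outer(B[k:, k+1:] @ w, w)
    return B

if __name__ == "__main__":
    A = np.random.rand(8, 5)
    B = golub_kahan_bidiag(A)
    # Orthogonal transformations preserve the singular values
    print(np.allclose(np.linalg.svd(A, compute_uv=False),
                      np.linalg.svd(B, compute_uv=False)))
```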
Sparse Linear Systems
- Boundary value problems and implicit methods for time-dependent PDEs yield systems of linear algebraic equations to solve
- A matrix is sparse if it has relatively few nonzero entries
- Sparsity can be exploited to use far less than the O(n²) storage and O(n³) work required by the standard approach to solving a dense system, assuming the matrix is n × n

Storage Format of Sparse Matrices
- Sparse matrices are typically stored in special formats that store only the nonzero entries, along with indices that identify their locations in the matrix, such as
  - compressed-row storage (CRS)
  - compressed-column storage (CCS)
  - block compressed-row storage (BCRS)
- Banded matrices have their own special storage formats (such as compressed diagonal storage (CDS))
- See the survey at http://netlib.org/linalg/html_templates/node90.html
- Explicitly storing indices incurs additional storage overhead and makes arithmetic operations on the nonzeros less efficient, due to the indirect addressing needed to access operands, so these formats are beneficial only for very sparse matrices
- The storage format can have a big impact on the effectiveness of different versions of the same algorithm (with different orderings of the loops)
- Besides direct methods, these storage formats are also important in implementing iterative and multigrid solvers

Example of Compressed-Row Storage (CRS)
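Below is a small sketch of CRS for a made-up 4 × 4 matrix: the nonzeros are kept in a value array, their column positions in an index array, and a row-pointer array marks where each row's entries begin. SciPy's csr_matrix is used to cross-check the three arrays, and the hand-written matrix-vector product shows the indirect addressing mentioned above.

```python
import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[10., 0., 0., -2.],
              [ 3., 9., 0.,  0.],
              [ 0., 7., 8.,  7.],
              [ 0., 0., 1.,  5.]])

S = csr_matrix(A)
print(S.data)     # values:       [10. -2.  3.  9.  7.  8.  7.  1.  5.]
print(S.indices)  # col. indices: [0 3 0 1 1 2 3 2 3]
print(S.indptr)   # row pointers: [0 2 4 7 9]

def crs_matvec(values, col_indices, row_ptr, x):
    """y = A x from the CRS arrays; note the indirect access x[col_indices[k]]."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_indices[k]]
    return y

x = np.arange(1.0, 5.0)
print(np.allclose(crs_matvec(S.data, S.indices, S.indptr, x), A @ x))  # True
```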
Banded Linear Systems
- The cost of factorizing a banded linear system depends on the bandwidth
  - For an SPD n × n matrix with semi-bandwidth s, the total flop count of Cholesky factorization is about ns²
  - For an n × n matrix with lower bandwidth p and upper bandwidth q:
    - In A = LU (LU without pivoting), the total flop count is about 2npq
    - In PA = LU (LU with partial pivoting), the total flop count is about 2np(p + q)
- Banded matrices have their own special storage formats (such as compressed diagonal storage (CDS))

Fill
- When applying LU or Cholesky factorization to a general sparse matrix, taking linear combinations of rows or columns to annihilate unwanted nonzero entries can introduce new nonzeros into matrix locations that were initially zero
- Such new nonzeros, called fill or fill-in, must be stored and may themselves eventually need to be annihilated in order to obtain the triangular factors
- The resulting triangular factors can be expected to contain at least as many nonzeros as the original matrix, and usually significant fill as well

Sparse Cholesky Factorization
- In general, heuristic algorithms are employed to reorder the matrix to reduce fill
- The amount of fill is sensitive to the order in which the rows and columns of the matrix are processed, so the basic problem in sparse factorization is reordering the matrix to limit fill during factorization
- Exact minimization of fill is a hard combinatorial problem (NP-complete), but heuristic algorithms such as minimum degree and nested dissection limit fill well for many types of problems
- For Cholesky factorization, both rows and columns are reordered

Graph Model of Elimination
- Each step of the factorization process corresponds to the elimination of one node from a graph
- Eliminating a node causes its neighboring nodes to become connected to each other
- If any such neighbors were not already connected, then fill results (new edges in the graph and new nonzeros in the matrix)
- Commonly used reordering methods include Cuthill-McKee, approximate minimum degree ordering (AMD), and nested dissection

Reordering to Reduce Bandwidth
- The Cuthill-McKee algorithm and the reverse Cuthill-McKee algorithm
  - The Cuthill-McKee algorithm is a variant of the breadth-first search algorithm on graphs:
    - Start with a peripheral node
    - Generate levels R_i for i = 1, 2, ... until all nodes are exhausted
    - The set R_{i+1} is created from the set R_i by listing all vertices adjacent to the nodes in R_i
    - Within each level, nodes are listed in increasing order of degree
  - The reverse Cuthill-McKee algorithm (RCM) reverses the resulting index numbers (sketched at the end of this section)

Approximate Minimum Degree Ordering
- A good heuristic for limiting fill is to eliminate first those nodes having the fewest neighbors
- The number of neighbors is called the degree of a node, so the heuristic is known as minimum degree
- At each step, select the node of smallest degree for elimination, breaking ties arbitrarily
- After a node has been eliminated, its neighbors become connected to each other, so the degrees of some nodes may change
- The process is then repeated, with a new node of minimum degree eliminated next, and so on until all nodes have been eliminated

Minimum Degree Ordering, continued
- The Cholesky factor suffers much less fill than with the original ordering, and the advantage grows with problem size
- Sophisticated versions of minimum degree are among the most effective general-purpose orderings known

Comparison of Different Orderings of Example Matrix
[Figure. Left: nonzero pattern of matrix A. Right: nonzero pattern of the Cholesky factor R.]

Nested Dissection Ordering
- Nested dissection is based on divide-and-conquer
- First, a small set of nodes is selected whose removal splits the graph into two pieces of roughly equal size
- No node in either piece is connected to any node in the other, so no fill occurs in either piece due to the elimination of any node in the other
- The separator nodes are numbered last; then the process is repeated recursively on each remaining piece of the graph until all nodes have been numbered

Nested Dissection Ordering, continued
- Dissection induces blocks of zeros in the matrix that are automatically preserved during factorization
- The recursive nature of the algorithm can be seen in the hierarchical block structure of the matrix, which would involve many more levels in larger problems
- Again, the Cholesky factor suffers much less fill than with the original ordering, and the advantage grows with problem size

Sparse Gaussian Elimination
- For Gaussian elimination, only the columns are reordered
- Pivoting introduces additional fill in sparse Gaussian elimination
- Reordering may be done dynamically or statically
- The reverse Cuthill-McKee algorithm applied to A + Aᵀ may be used to reduce bandwidth
- Column approximate minimum degree (COLAMD) may be employed to reorder the matrix to reduce fill (sketched at the end of this section)

Comparison of Different Orderings of Example Matrix
[Figure: nonzero pattern of A and of L + U with random ordering]
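To make the ordering comparison concrete, here is a small sketch using SciPy's SuperLU interface (scipy.sparse.linalg.splu) on a 2-D Laplacian test matrix chosen for illustration; it compares the fill in the computed factors under the natural column order and under column approximate minimum degree.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# 2-D Laplacian on a 30-by-30 grid (a standard sparse test matrix)
k = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(k, k))
A = sp.csc_matrix(sp.kronsum(T, T))
print("nnz(A) =", A.nnz)

# 'NATURAL' keeps the original column order; 'COLAMD' is column
# approximate minimum degree. Fill is measured by nnz(L) + nnz(U).
for ordering in ("NATURAL", "COLAMD"):
    lu = splu(A, permc_spec=ordering)
    print(ordering, "nnz(L)+nnz(U) =", lu.L.nnz + lu.U.nnz)
```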
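Finally, a brief sketch of bandwidth reduction with reverse Cuthill-McKee, using scipy.sparse.csgraph.reverse_cuthill_mckee on a randomly generated symmetric sparse matrix; the test matrix and the bandwidth helper are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(M):
    """Maximum distance of a nonzero entry from the diagonal."""
    C = M.tocoo()
    return int(np.abs(C.row - C.col).max())

# Random sparse symmetric matrix with a nonzero diagonal
n = 200
A = sp.random(n, n, density=0.02, random_state=0, format="csr")
A = sp.csr_matrix(A + A.T + sp.identity(n))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm, :][:, perm]            # symmetric permutation P A P^T
print(bandwidth(A), bandwidth(B))  # bandwidth typically drops substantially
```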