Current Trends in Numerical Linear Algebra: from Theory to Practice

SOFSEM '94 – Milovy, The Czech Republic, 27. 11. – 9. 12. 1994

Z. Strakoš, M. Tůma
Institute of Computer Science, Academy of Sciences of the Czech Republic
Pod vodárenskou věží 2, CZ–182 07 Prague, The Czech Republic
Email: [email protected] [email protected]

Abstract: Our goal is to show on several examples the great progress made in numerical analysis in the past decades, together with the principal problems and relations to other disciplines. We restrict ourselves to numerical linear algebra, or, more specifically, to solving Ax = b, where A is a real nonsingular n by n matrix and b a real n-dimensional vector, and to computing eigenvalues of a sparse matrix A. We discuss recent developments in both sparse direct and iterative solvers, as well as fundamental problems in computing eigenvalues. The effects of parallel architectures on the choice of the method and on the implementation of codes are stressed throughout the contribution.

Keywords: numerical analysis, numerical linear algebra, linear algebraic systems, sparse direct solvers, iterative methods, eigenvalue computations, parallel architectures and algorithms

1 Motivation

Our contribution deals with numerical analysis. What is numerical analysis? And what is its relation to computer science (computing sciences)? It is hard to give a definition. Sometimes "pure" mathematicians do not consider numerical analysis a part of mathematics, and "pure" computer scientists do not consider it a part of computer science. Nevertheless, hardly anyone would argue against the importance of this field. The dramatic development of high technology in the last decades would not have been possible without mathematical modelling and extensive use of numerical analysis, to mention just one important area of application.

Nick Trefethen proposes in his unpublished note [38] the following definition:

    Numerical analysis is the study of algorithms for mathematical problems involving continuous variables.

The keywords are algorithm and continuous, the latter typically meaning real or complex. Since real or complex numbers cannot be represented exactly on computers, part of the business of numerical analysis must be to study rounding errors. To understand finite algorithms, or direct methods, e.g. Gaussian elimination or Cholesky factorization for solving systems of linear algebraic equations, one has to understand computer architectures, operation counts and the propagation of rounding errors. This example, however, does not tell the whole story. Most mathematical problems involving continuous variables cannot be solved (or solved effectively) by finite algorithms. A classical example: there are no finite algorithms for matrix eigenvalue problems (and the same conclusion extends to almost anything nonlinear). Therefore the deeper business of numerical analysis is approximating unknown quantities that cannot be known exactly even in principle. A part of it is, of course, estimating the precision of the computed approximation.
Our goal is to show on several examples the great achievements of numerical analysis, together with the principal problems and relations to other disciplines. We restrict ourselves to numerical linear algebra, or, more specifically, to solving Ax = b, where A is a real nonsingular n by n matrix and b a real n-dimensional vector (for simplicity we restrict ourselves to real systems; many statements apply, of course, also to the complex case), and to computing eigenvalues of a square matrix A. Much of scientific computing depends on these two highly developed subjects (or on closely related ones). This restriction allows us to go deep and show things in a sharp light (well, at least in principle; when judging this contribution you must take into account our, certainly limited, expertise and exposition capability).

We emphasize two main ideas which lie behind our exposition and which are essential for the recent trends and developments in numerical linear algebra. First, it is our strong belief that any "software solution" without a deep understanding of the mathematical (physical, technical, ...) background of the problem is very dangerous and may lead to fatal errors. This is illustrated, e.g., on the problem of computing eigenvalues and on characterizing the convergence of iterative methods. Second, there is always a long way (with many unexpectedly complicated problems) from the numerical method to the efficient and reliable code. We demonstrate this especially on the current trends in developing sparse direct solvers.

2 Solving large sparse linear algebraic systems

Although the basic scheme of the symmetric Gaussian elimination is very simple and can be cast in a few lines, the effective algorithms which can be used for really large problems usually take from many hundreds to thousands of code statements. The difference in the timings can then be many orders of magnitude. This reveals the real complexity of the intricate codes which are necessary to cope with large real-world problems. Subsection 2.1 is devoted to the sparse direct solvers stemming from the symmetric Gaussian elimination. Iterative solvers, which are based on approaching the solution step by step from some initially chosen approximation, are discussed in subsection 2.2. A comparison of both approaches is given in subsection 2.3.

2.1 Sparse direct solvers

This section provides a brief description of the basic ideas concerning sparsity in solving large linear systems by direct methods. Solving large sparse linear systems is the bottleneck in a wide range of engineering and scientific computations. We restrict ourselves to the symmetric positive definite case, where most of the important ideas about algorithms, data structures and computing facilities can be explained. In this case the solution process is inherently stable, and we can thus avoid the numerical pivoting which would otherwise complicate the description.

Efficient solution of large sparse linear systems needs a careful choice of the algorithmic strategy, influenced by the characteristics of computer architectures (CPU speed, memory hierarchy, bandwidths between cache and main memory and between main memory and auxiliary storage). Knowledge of the most important architectural features is necessary to make our computations really efficient.

First we provide a brief description of the Cholesky factorization method for solving a sparse linear system; this is the core of the symmetric Gaussian elimination. Basic notation can be found, e.g., in [2], [18]. We use the square-root-free formulation of the factorization,

    A = L D L^T,

where L is a lower triangular matrix (with unit diagonal) and D is a diagonal matrix. Having L and D, the solution x can be computed using two triangular substitutions and one diagonal scaling:

    L ȳ = b ;   y = D^{-1} ȳ ;   L^T x = y.
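The following dense, unoptimized Python sketch (ours, assuming NumPy is available; the paper itself works only with pseudo-code) renders the displayed formulas directly: it computes the factorization A = LDL^T column by column and then solves Ax = b by forward substitution, diagonal scaling and back substitution.

    import numpy as np

    def ldlt(A):
        # Square-root-free Cholesky A = L D L^T for a symmetric positive
        # definite A: L is unit lower triangular, D is diagonal.
        n = A.shape[0]
        L = np.eye(n)
        d = np.zeros(n)
        for j in range(n):
            d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
            for i in range(j + 1, n):
                L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
        return L, d

    def solve_ldlt(L, d, b):
        # L ybar = b (forward substitution), y = D^{-1} ybar (scaling),
        # L^T x = y (back substitution), as in the text.
        n = len(b)
        ybar = b.astype(float)
        for i in range(n):
            ybar[i] -= L[i, :i] @ ybar[:i]
        y = ybar / d
        x = y.copy()
        for i in range(n - 1, -1, -1):
            x[i] -= L[i + 1:, i] @ x[i + 1:]
        return x

    A = np.array([[4.0, 2.0, 2.0],
                  [2.0, 5.0, 3.0],
                  [2.0, 3.0, 6.0]])
    b = np.array([1.0, 2.0, 3.0])
    L, d = ldlt(A)
    x = solve_ldlt(L, d, b)
    print(np.allclose(L @ np.diag(d) @ L.T, A), np.allclose(A @ x, b))  # True True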
Two primary approaches to the factorization are as follows (we do not mention the row-Cholesky approach, since its algorithmic properties usually do not fit well with modern computer architectures).

The left-looking approach can be described by the following pseudo-code:

    (1)  for j = 1 to n
    (2)    for k = 1 to j-1 if a_kj ≠ 0
    (3)      for i = k+1 to n if l_ik ≠ 0
    (4)        a_ij = a_ij - l_ik * a_kj
    (5)      end i
    (6)    end k
    (7)    d_j = a_jj
    (8)    for i = j+1 to n
    (9)      l_ij = a_ij / d_j
    (10)   end i
    (11) end j

In this case, a column j of L is computed by gathering all contributions from the previously computed columns (i.e., the columns to the left of column j in the matrix) to column j. Since the loop at lines (3)–(5) in this pseudo-code involves two columns, j and k, with potentially different nonzero structures, the problem of matching the corresponding nonzeros must be resolved. The vector modification of column j by column k at lines (3)–(5) is denoted by cmod(j, k); the vector scaling at lines (8)–(10) is denoted by cdiv(j).

The right-looking approach can be described by the following pseudo-code:

    (1)  for k = 1 to n
    (2)    d_k = a_kk
    (3)    for i = k+1 to n if a_ik ≠ 0
    (4)      l_ik = a_ik / d_k
    (5)    end i
    (6)    for j = k+1 to n if a_kj ≠ 0
    (7)      for i = k+1 to n if l_ik ≠ 0
    (8)        a_ij = a_ij - l_ik * a_kj
    (9)      end i
    (10)   end j
    (11) end k

In the right-looking approach, once a column k is completed, it immediately generates all its contributions to the subsequent columns, i.e., the columns to the right of it in the matrix. A number of approaches have been taken to solve the problem of matching the nonzeros from columns j and k at lines (6)–(10) of the pseudo-code, as will be mentioned later. Similarly as above, the operation at lines (3)–(5) is denoted by cdiv(k), and the column modification at lines (7)–(9) is denoted by cmod(j, k).

Discretized operators from most applications, such as structural analysis, computational fluid dynamics, device and process simulation, and electric power network problems, contain only a small fraction of nonzero elements: these matrices are very sparse. Explicit consideration of sparsity leads to substantial savings in space and computational time. Usually the savings in time are more important, since the time complexity grows more quickly as a function of the problem size than the space complexity (memory requirements).

[Figure 1.1: The arrow matrix A in the natural ordering (dense first row and first column plus the diagonal) and Ā in the reverse ordering (dense last row and last column plus the diagonal).]
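The arrow matrix of Figure 1.1 is the classical illustration of how much the elimination ordering matters. The following sketch (ours, assuming NumPy, and using a dense Cholesky factorization purely to count nonzeros, which is sensible only for a tiny example) factors the same arrow matrix in the natural and in the reverse ordering:

    import numpy as np

    n = 7
    # Arrow matrix in the natural ordering: dense first row and first
    # column plus the diagonal; diagonally dominant, hence s.p.d.
    A = n * np.eye(n)
    A[0, :] = 1.0
    A[:, 0] = 1.0
    A[0, 0] = float(n)

    p = np.arange(n)[::-1]             # the reverse ordering of Figure 1.1
    A_rev = A[np.ix_(p, p)]            # the arrow now points the other way

    for name, M in (("natural", A), ("reverse", A_rev)):
        L = np.linalg.cholesky(M)      # lower triangular factor
        nnz = int(np.sum(np.abs(L) > 1e-12))
        print(name, "ordering: nnz(L) =", nnz)

    # Natural ordering: the factor fills in completely, n(n+1)/2 = 28 nonzeros.
    # Reverse ordering: no fill-in at all, 2n - 1 = 13 nonzeros.

In the natural ordering, the very first elimination step couples every remaining column, whereas in the reverse ordering each step touches only the last row, which is why a good ordering is the first concern of any sparse direct solver.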