PARDISO User Guide Version
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
SuperLU Users' Guide
Xiaoye S. Li [1], James W. Demmel [2], John R. Gilbert [3], Laura Grigori [4], Meiyue Shao [5], Ichitaro Yamazaki [6]. September 1999; last update: August 2011.
[1] Lawrence Berkeley National Lab, MS 50F-1650, 1 Cyclotron Rd, Berkeley, CA 94720 ([email protected]). This work was supported in part by the Director, Office of Advanced Scientific Computing Research of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
[2] Computer Science Division, University of California, Berkeley, CA 94720 ([email protected]). The research of Demmel and Li was supported in part by NSF grant ASC-9313958, DOE grant DE-FG03-94ER25219, UT Subcontract No. ORA4466 from ARPA Contract No. DAAL03-91-C0047, DOE grant DE-FG03-94ER25206, and NSF Infrastructure grants CDA-8722788 and CDA-9401156.
[3] Department of Computer Science, University of California, Santa Barbara, CA 93106 ([email protected]). The research of this author was supported in part by the Institute for Mathematics and Its Applications at the University of Minnesota and in part by DARPA Contract No. DABT63-95-C0087. Copyright © 1994-1997 by Xerox Corporation. All rights reserved.
[4] INRIA Saclay-Ile de France, Laboratoire de Recherche en Informatique, Universite Paris-Sud 11 ([email protected])
[5] Department of Computing Science and HPC2N, Umeå University, SE-901 87, Umeå, Sweden ([email protected])
[6] Innovative Computing Laboratory, Department of Electrical Engineering and Computer Science, The University of Tennessee ([email protected]). The research of this author was supported in part by the Director, Office of Advanced Scientific Computing Research of the U.S. …
Cygwin User's Guide
Copyright © Cygwin authors.
Permission is granted to make and distribute verbatim copies of this documentation provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this documentation under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this documentation into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the Free Software Foundation.
Contents:
1 Cygwin Overview
1.1 What is it?
1.2 Quick Start Guide for those more experienced with Windows
1.3 Quick Start Guide for those more experienced with UNIX
1.4 Are the Cygwin tools free software?
1.5 A brief history of the Cygwin project
1.6 Highlights of Cygwin Functionality
1.6.1 Introduction
1.6.2 Permissions and Security
1.6.3 File Access
1.6.4 Text Mode vs. Binary Mode
1.6.5 ANSI C Library
1.6.6 Process Creation
1.6.6.1 Problems with process creation
1.6.7 Signals
1.6.8 Sockets
1.6.9 Select
1.7 What's new and what changed in Cygwin
1.7.1 What's new and what changed in 3.2 …
A New Analysis of Iterative Refinement and Its Application to Accurate Solution of Ill-Conditioned Sparse Linear Systems
Erin Carson and Nicholas J. Higham, 2017. MIMS EPrint 2017.12, Manchester Institute for Mathematical Sciences, School of Mathematics, The University of Manchester. Reports available from: http://eprints.maths.manchester.ac.uk/ and by contacting: The MIMS Secretary, School of Mathematics, The University of Manchester, Manchester, M13 9PL, UK. ISSN 1749-9097.
Abstract. Iterative refinement is a long-standing technique for improving the accuracy of a computed solution to a nonsingular linear system Ax = b obtained via LU factorization. It makes use of residuals computed in extra precision, typically at twice the working precision, and existing results guarantee convergence if the matrix A has condition number safely less than the reciprocal of the unit roundoff, u. We identify a mechanism that allows iterative refinement to produce solutions with normwise relative error of order u to systems with condition numbers of order u^{-1} or larger, provided that the update equation is solved with a relative error sufficiently less than 1. A new rounding error analysis is given and its implications are analyzed. Building on the analysis, we develop a GMRES-based iterative refinement method (GMRES-IR) that makes use of the computed LU factors as preconditioners. GMRES-IR exploits the fact that even if A is extremely ill conditioned the LU factors contain enough information that preconditioning can greatly reduce the condition number of A.
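As a concrete illustration of the mechanism, here is a minimal NumPy/SciPy sketch of the GMRES-IR idea: factorize once in low precision, then solve each update equation by GMRES preconditioned with those factors. It is a simplified sketch, not the authors' algorithm: residuals are computed in plain float64 rather than true extra precision, and the iteration count is fixed.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

def gmres_ir(A, b, n_steps=5):
    # LU factorization computed once, in single precision.
    lu32 = lu_factor(A.astype(np.float32))
    n = A.shape[0]
    # Preconditioner: apply the single-precision factors as an approximate inverse of A.
    M = LinearOperator((n, n), matvec=lambda v: lu_solve(
        lu32, v.astype(np.float32)).astype(np.float64))
    x = M.matvec(b)                 # initial solve with the LU factors
    for _ in range(n_steps):
        r = b - A @ x               # residual (the paper uses extra precision here)
        d, info = gmres(A, r, M=M)  # update equation solved by LU-preconditioned GMRES
        x += d
    return x
```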
Solving Systems of Linear Equations on the CELL Processor Using Cholesky Factorization – LAPACK Working Note 184
Jakub Kurzak [1], Alfredo Buttari [1], Jack Dongarra [1,2]. [1] Department of Computer Science, University of Tennessee, Knoxville, Tennessee 37996. [2] Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831. May 10, 2007.
ABSTRACT: The STI CELL processor introduces pioneering solutions in processor architecture. At the same time it presents new challenges for the development of numerical algorithms. One is effective exploitation of the differential between the speed of single and double precision arithmetic; the other is efficient parallelization between the short vector SIMD cores. In this work, the first challenge is addressed by utilizing a mixed-precision algorithm for the solution of a dense symmetric positive definite system of linear equations, which delivers double precision accuracy, while performing the bulk of the work in single precision. The second challenge is approached by introducing much finer granularity of parallelization …
1 Motivation. In numerical computing, there is a fundamental performance advantage of using single precision floating point data format over double precision data format, due to more compact representation, thanks to which twice the number of single precision data elements can be stored at each stage of the memory hierarchy. Short vector SIMD processing provides yet more potential for performance gains from using single precision arithmetic over double precision. Since the goal is to process the entire vector in a single operation, the computation throughput can be doubled when the data representation is halved.
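The structure of the algorithm is easy to sketch on conventional hardware. The following Python fragment illustrates the mixed-precision scheme the abstract describes (single-precision Cholesky, double-precision refinement); it is a sketch of the idea, not the CELL implementation:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def mixed_precision_spd_solve(A, b, n_steps=10):
    # O(n^3) Cholesky factorization done in single precision (the fast format);
    # only O(n^2) refinement work is done in double precision.
    c32 = cho_factor(A.astype(np.float32))
    x = cho_solve(c32, b.astype(np.float32)).astype(np.float64)
    for _ in range(n_steps):
        r = b - A @ x                             # double-precision residual
        d = cho_solve(c32, r.astype(np.float32))  # single-precision correction
        x += d.astype(np.float64)
        if np.linalg.norm(r, np.inf) <= (np.finfo(np.float64).eps
                * np.linalg.norm(A, np.inf) * np.linalg.norm(x, np.inf)):
            break
    return x

# Example: a well-conditioned SPD system recovers double-precision accuracy.
rng = np.random.default_rng(0)
G = rng.standard_normal((200, 200))
A = G @ G.T + 200 * np.eye(200)   # symmetric positive definite test matrix
b = rng.standard_normal(200)
x = mixed_precision_spd_solve(A, b)
print(np.linalg.norm(b - A @ x, np.inf))
```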
Linear Algebra Libraries
Claire Mouton, [email protected], March 2009. arXiv:1103.3020v1 [cs.MS], 15 Mar 2011.
Contents:
Part I Requirements
Part II CPPLapack
Part III Eigen
Part IV Flens
Part V Gmm++
Part VI GNU Scientific Library (GSL)
Part VII IT++
Part VIII Lapack++
Part IX Matrix Template Library (MTL)
Part X PETSc
Part XI Seldon
Part XII SparseLib++
Part XIII Template Numerical Toolkit (TNT)
Part XIV Trilinos
Part XV uBlas
Part XVI Other Libraries
Part XVII Links and Benchmarks
1 Links
2 Benchmarks
2.1 Benchmarks for Linear Algebra Libraries
2.2 Benchmarks including Seldon
2.2.1 Benchmarks for Dense Matrix
2.2.2 Benchmarks for Sparse Matrix
Part XVIII Appendix
3 Flens Overloaded Operator Performance Compared to Seldon
4 Flens, Seldon and Trilinos Content Comparisons
4.1 Available Matrix Types from Blas (Flens and Seldon)
4.2 Available Interfaces to Blas and Lapack Routines (Flens and Seldon)
4.3 Available Interfaces to Blas and Lapack Routines (Trilinos)
5 Flens and Seldon Synoptic Comparison
Part I: Requirements. This document has been written to help in the choice of a linear algebra library to be included in Verdandi, a scientific library for data assimilation. The main requirements are:
1. Portability: Verdandi should compile on BSD systems, Linux, MacOS, Unix and Windows. Beyond the portability itself, this often ensures that most compilers will accept Verdandi. An obvious consequence is that all dependencies of Verdandi must be portable, especially the linear algebra library.
2. High-level interface: the dependencies should be compatible with the building of the high-level interface (e.g. …
Error Bounds from Extra-Precise Iterative Refinement
James Demmel, Yozo Hida, and William Kahan, University of California, Berkeley; Xiaoye S. Li, Lawrence Berkeley National Laboratory; Sonil Mukherjee, Oracle; and E. Jason Riedy, University of California, Berkeley.
We present the design and testing of an algorithm for iterative refinement of the solution of linear equations where the residual is computed with extra precision. This algorithm was originally proposed in 1948 and analyzed in the 1960s as a means to compute very accurate solutions to all but the most ill-conditioned linear systems. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard has essentially removed the first obstacle. To overcome the second obstacle, we show how the application of iterative refinement can be used to compute an error bound in any norm at small cost and use this to compute both an error bound in the usual infinity norm, and a componentwise relative error bound.
This research was supported in part by the NSF Cooperative Agreement No. ACI-9619020; NSF Grant Nos. ACI-9813362 and CCF-0444486; the DOE Grant Nos. DE-FG03-94ER25219, DE-FC03-98ER25351, and DE-FC02-01ER25478; and the National Science Foundation Graduate Research Fellowship. The authors wish to acknowledge the contribution from Intel Corporation, Hewlett-Packard Corporation, IBM Corporation, and the National Science Foundation grant EIA-0303575 in making hardware and software available for the CITRIS Cluster which was used in producing these research results.
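The key trick is that the refinement iteration itself yields an error bound: if the corrections shrink geometrically, their norms bound the error remaining in the solution. A rough Python sketch of that idea (not the paper's precise estimator) follows; the `corrections` list and the 0.9 cap are illustrative assumptions.

```python
import numpy as np

def refinement_error_bounds(x, corrections):
    """Rough error bounds from a converging refinement sequence. 'corrections'
    holds the update vectors d_1, d_2, ... already applied to x. If updates
    shrink geometrically with ratio rho, the error still remaining in x is
    roughly ||d_last|| * rho / (1 - rho)."""
    d_prev, d_last = corrections[-2], corrections[-1]
    # Estimate the contraction ratio; cap it so the bound stays finite.
    rho = min(np.linalg.norm(d_last, np.inf) / np.linalg.norm(d_prev, np.inf), 0.9)
    # Normwise (infinity-norm) relative error bound.
    normwise = (np.linalg.norm(d_last, np.inf) * rho / (1 - rho)
                / np.linalg.norm(x, np.inf))
    # Componentwise relative bound: measure each component against |x_i|.
    componentwise = np.max(np.abs(d_last) / np.abs(x)) * rho / (1 - rho)
    return normwise, componentwise
```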
Technical Computing on the OS … that is not Linux! Or how to leverage everything you've learned, on a Windows box as well
Tools of the trade: Technical Computing on the OS … that is not Linux! Or how to leverage everything you've learned, on a Windows box as well. Sean Mortazavi & Felipe Ayora.
Typical situation with TC/HPC folks (Note: stats completely made up!)
Why I have a Windows box: it was in the office when I joined; IT forced me; I couldn't afford a Mac; because I LIKE Windows!; it's the best gaming machine.
How I use it: Outlook / Email; PowerPoint; Excel; Gaming; Technical/Scientific computing.
The general impression
"Enterprise community": guys in suits; Word, Excel, Outlook; run prepackaged stuff.
"Hacker community": guys in jeans; Emacs, Python, gmail; builds/runs OSS stuff.
Common complaints about Windows
• I have a Windows box, but Windows …
• Is hard to learn…
• Doesn't have a good shell
• Doesn't have my favorite editor
• Doesn't have my favorite IDE
• Doesn't have my favorite compiler or libraries
• Locks me in
• Doesn't play well with OSS
• In summary: …
My hope …
• I have a Windows box, and Windows …
• Is easy to learn…
• Has excellent shells
• Has my favorite editor
• Supports my favorite IDE
• Supports my compilers and libraries
• Does not lock me in
• Plays well with OSS
• In summary: …
How?
• Recreating a Unix-like veneer over Windows to minimize your learning curve
• Leveraging your investment in know-how & code
• Showing what key codes already run natively on Windows just as well
• Kicking the dev tires using cross-platform languages
Objective is to: help you ADD to your toolbox, not take anything away from it!
At a high level…
"The Unix look & feel": Cygwin; SUA; windowing systems; standalone shell/utils.
General purpose development: IDEs; editors; compilers / languages / tools; make; libraries.
Dedicated CAS / IDEs: CAS environments.
And if there is time, a couple of demos…
Cygwin
• What is it? A Unix-like environment for Windows. …
User Guide for CHOLMOD: a Sparse Cholesky Factorization and Modification Package
Timothy A. Davis, [email protected], http://www.suitesparse.com. VERSION 3.0.14, Oct 22, 2019.
Abstract. CHOLMOD [1] is a set of routines for factorizing sparse symmetric positive definite matrices of the form A or AA^T, updating/downdating a sparse Cholesky factorization, solving linear systems, updating/downdating the solution to the triangular system Lx = b, and many other sparse matrix functions for both symmetric and unsymmetric matrices. Its supernodal Cholesky factorization relies on LAPACK and the Level-3 BLAS, and obtains a substantial fraction of the peak performance of the BLAS. Both real and complex matrices are supported. It also includes a non-supernodal LDL^T factorization method that can factorize symmetric indefinite matrices if all of their leading submatrices are well-conditioned (D is diagonal). CHOLMOD is written in ANSI/ISO C, with both C and MATLAB interfaces. This code works on Microsoft Windows and many versions of Unix and Linux.
CHOLMOD Copyright © 2005-2015 by Timothy A. Davis. Portions are also copyrighted by William W. Hager (the Modify Module), and the University of Florida (the Partition and Core Modules). All Rights Reserved. See CHOLMOD/Doc/License.txt for the license. CHOLMOD is also available under other licenses that permit its use in proprietary applications; contact the authors for details. See http://www.suitesparse.com for the code and all documentation, including this User Guide.
[1] CHOLMOD is short for CHOLesky MODification, since a key feature of the package is its ability to update/downdate a sparse Cholesky factorization.
Contents:
1 Overview
2 Primary routines and data structures
3 Simple example program
4 Installation of the C-callable library
5 Using CHOLMOD in MATLAB
5.1 analyze: order and analyze …
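CHOLMOD itself is called from C or MATLAB; to exercise the same analyze/factorize/solve pipeline from Python, the sketch below goes through scikit-sparse, a third-party wrapper around CHOLMOD. It assumes that package is installed and exposes `cholesky` as documented; this is an illustration of the workflow, not CHOLMOD's C API.

```python
import numpy as np
from scipy.sparse import csc_matrix, identity
from sksparse.cholmod import cholesky  # scikit-sparse wrapper around CHOLMOD (assumed installed)

# Build a small sparse symmetric positive definite matrix A = M^T M + I.
M = csc_matrix(np.array([[2.0, 0.0, 1.0],
                         [0.0, 3.0, 0.0],
                         [0.0, 1.0, 4.0]]))
A = (M.T @ M + identity(3)).tocsc()
b = np.array([1.0, 2.0, 3.0])

factor = cholesky(A)  # fill-reducing ordering + (supernodal) factorization
x = factor(b)         # triangular solves: solve A x = b with the stored factor
print(np.allclose(A @ x, b))
```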
Three-Precision GMRES-Based Iterative Refinement for Least Squares Problems
Erin Carson, Nicholas J. Higham, and Srikara Pranesh, 2020. MIMS EPrint 2020.5, Manchester Institute for Mathematical Sciences, School of Mathematics, The University of Manchester. Reports available from: http://eprints.maths.manchester.ac.uk/ and by contacting: The MIMS Secretary, School of Mathematics, The University of Manchester, Manchester, M13 9PL, UK. ISSN 1749-9097.
Abstract. The standard iterative refinement procedure for improving an approximate solution to the least squares problem min_x ||b − Ax||_2, where A ∈ R^{m×n} with m ≥ n has full rank, is based on solving the (m + n) × (m + n) augmented system with the aid of a QR factorization. In order to exploit multiprecision arithmetic, iterative refinement can be formulated to use three precisions, but the resulting algorithm converges only for a limited range of problems. We build an iterative refinement algorithm called GMRES-LSIR, analogous to the GMRES-IR algorithm developed for linear systems [SIAM J. Sci. Comput., 40 (2019), pp. A817-A847], that solves the augmented system using GMRES preconditioned by a matrix based on the computed QR factors. We explore two left preconditioners; the first has full off-diagonal blocks and the second is block diagonal and can be applied in either left-sided or split form. We prove that for a wide range of problems the first preconditioner yields backward and forward errors for the augmented system of order the working precision under suitable assumptions on the precisions and the problem conditioning.
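For concreteness, here is a small NumPy sketch of the augmented system itself, solved densely and checked against a QR-based least squares solve. GMRES-LSIR replaces this direct solve with preconditioned GMRES in three precisions; the random test problem is purely illustrative.

```python
import numpy as np

# Augmented-system formulation of min_x ||b - A x||_2:
#   [ I   A ] [ r ]   [ b ]
#   [ A^T 0 ] [ x ] = [ 0 ],   with r = b - A x the least squares residual.
rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

K = np.block([[np.eye(m), A],
              [A.T, np.zeros((n, n))]])
rhs = np.concatenate([b, np.zeros(n)])
rx = np.linalg.solve(K, rhs)
r, x = rx[:m], rx[m:]

# Agrees with the QR-based least squares solution.
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```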
Apple File System Reference
Apple Developer documentation.
Contents:
About Apple File System
General-Purpose Types: paddr_t; prange_t; uuid_t
Objects: obj_phys_t; Supporting Data Types; Object Identifier Constants; Object Type Masks; Object Types; Object Type Flags
EFI Jumpstart: Booting from an Apple File System Partition; nx_efi_jumpstart_t; Partition UUIDs
Container: Mounting an Apple File System Partition; nx_superblock_t; Container Flags; Optional Container Feature Flags; Read-Only Compatible Container Feature Flags; Incompatible Container Feature Flags; Block and Container Sizes; nx_counter_id_t; checkpoint_mapping_t …
Using Mixed Precision in Numerical Computations to Speedup Linear Algebra Solvers
Jack Dongarra, UTK/ORNL/U Manchester; Azzam Haidar, Nvidia; Nick Higham, U of Manchester; Stan Tomov, UTK. Slides can be found: http://bit.ly/icerm-05-2020-dongarra (5/7/20).
Background
• My interest in mixed precision began with my dissertation, "Improving the Accuracy of Computed Matrix Eigenvalues":
§ Compute the eigenvalues and eigenvectors in low precision, then improve selected values/vectors to higher precision for O(n^2) ops using the matrix decomposition
§ Extended to singular values, 1983
§ Algorithm in TOMS 710, 1992
IBM's Cell Processor (2004)
• 9 cores: Power PC at 3.2 GHz plus 8 SPEs
• 204.8 Gflop/s peak! $600
§ The catch is that this is for 32-bit floating point (single precision, SP)
§ 64-bit floating point peaks at 14.6 Gflop/s: 14 times slower than SP, a factor of 2 because of DP and 7 because of latency issues
• The SPEs were fully IEEE-754 compliant in double precision. In single precision, they only implement round-towards-zero, denormalized numbers are flushed to zero, and NaNs are treated like normal numbers.
Mixed Precision Idea Goes Something Like This…
• Exploit 32-bit floating point as much as possible, especially for the bulk of the computation
• Correct or update the solution with selective use of 64-bit floating point to provide a refined result
• Intuitively:
§ Compute a 32-bit result,
§ Calculate a correction to the 32-bit result using selected higher precision, and
§ Perform the update of the 32-bit result with the correction using high precision.
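The dissertation idea on the first slide can be sketched directly: compute an eigendecomposition in float32, then refine a selected pair in float64 with Rayleigh-quotient inverse iteration. The sketch below (an illustration of the idea, not the dissertation's algorithm) spends an O(n^3) solve per sweep for simplicity; with a pre-factored or tridiagonalized matrix the per-pair cost drops to the O(n^2) the slide quotes.

```python
import numpy as np

def refine_eigenpair(A, lam, v, sweeps=2):
    # Inverse iteration with a Rayleigh-quotient shift, carried out in
    # float64 to clean up an eigenpair computed in float32.
    A64 = A.astype(np.float64)
    v = v.astype(np.float64)
    lam = np.float64(lam)
    for _ in range(sweeps):
        w = np.linalg.solve(A64 - lam * np.eye(len(A64)), v)  # inverse iteration step
        v = w / np.linalg.norm(w)
        lam = v @ A64 @ v   # Rayleigh quotient: best eigenvalue estimate for v
    return lam, v

rng = np.random.default_rng(1)
S = rng.standard_normal((6, 6))
A = (S + S.T) / 2                                        # symmetric test matrix
lams32, vecs32 = np.linalg.eigh(A.astype(np.float32))    # cheap low-precision pass
lam, v = refine_eigenpair(A, lams32[0], vecs32[:, 0])    # high-precision clean-up
print(np.linalg.norm(A @ v - lam * v))                   # residual near float64 roundoff
```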
Automatic Performance Tuning of Sparse Matrix Kernels
by Richard Wilson Vuduc, B.S. (Cornell University) 1997. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor James W. Demmel, Chair; Professor Katherine A. Yelick; Professor Sanjay Govindjee. Fall 2003. Copyright 2003 by Richard Wilson Vuduc.
Abstract. This dissertation presents an automated system to generate highly efficient, platform-adapted implementations of sparse matrix kernels. These computational kernels lie at the heart of diverse applications in scientific computing, engineering, economic modeling, and information retrieval, to name a few. Informally, sparse kernels are computational operations on matrices whose entries are mostly zero, so that operations with and storage of these zero elements may be eliminated. The challenge in developing high-performance implementations of such kernels is choosing the data structure and code that best exploits the structural properties of the matrix (generally unknown until application run-time) for high performance on the underlying machine architecture (e.g., memory hierarchy configuration and CPU pipeline structure). We show that conventional implementations of important sparse kernels like sparse matrix-vector multiply (SpMV) have historically run at 10% or less of peak machine speed on cache-based superscalar architectures.
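The dissertation's central point is easy to demonstrate: the right data structure depends on the matrix. The sketch below builds a matrix of dense 4x4 blocks and times SpMV in generic CSR against register-blocked BSR storage; whether and by how much BSR wins depends on the matrix and the machine, which is exactly why the choice is worth automating. The sizes and density are illustrative assumptions.

```python
import time
import numpy as np
from scipy.sparse import kron, random as sparse_random

# A matrix whose nonzeros form dense 4x4 blocks: blocked (BSR) storage can
# beat generic CSR for SpMV on such structure.
n_blocks, bs = 512, 4
pattern = sparse_random(n_blocks, n_blocks, density=0.02, format="coo", random_state=0)
A_csr = kron(pattern, np.ones((bs, bs))).tocsr()      # expand nonzeros into blocks
A_bsr = A_csr.tobsr(blocksize=(bs, bs))               # register-blocked format
x = np.ones(A_csr.shape[1])

for name, M in [("CSR", A_csr), ("BSR", A_bsr)]:
    t0 = time.perf_counter()
    for _ in range(200):
        y = M @ x                                     # sparse matrix-vector multiply
    print(f"{name}: {time.perf_counter() - t0:.3f} s")
```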