The Accuracy of Floating Point Summation, by Nicholas J. Higham
Total Pages: 16
File Type: PDF; Size: 1020 KB
Recommended publications
-
Finding Rare Numerical Stability Errors in Concurrent Computations
Finding Rare Numerical Stability Errors in Concurrent Computations. Karine Even. Research thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, Technion - Israel Institute of Technology (M.Sc. thesis MSC-2013-17), Haifa, April 2013 (Iyar 5773). The thesis was completed under the supervision of Assistant Prof. Eran Yahav (Department of Computer Science, Technion) and Dr. Hana Chockler (IBM Haifa Research Labs) in the Department of Computer Science. The generous financial support of the Technion is gratefully acknowledged.
Contents: Abstract; 1 Introduction; 2 Overview; 3 Preliminaries (3.1 Programs as Transition Systems; 3.2 Cross-Entropy Method; 3.3 Cross-Entropy Method in Optimization Problems; 3.4 Cross-Entropy Method Over Graphs); 4 Cross-Entropy Based Testing for Numerical Stability Problems (4.1 Performance Function for Floating Point Summation and Subtraction Algorithms; 4.2 CE-Based Testing with Abstraction); 5 Evaluation (5.1 Implementation; 5.2 Benchmarks; 5.3 Methodology; 5.3.1 Description of Benchmarks; 5.3.2 Generating Benchmark Inputs).
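The table of contents centers on the cross-entropy (CE) method, which the thesis adapts (over graphs, with a performance function scoring numerical instability) to steer testing toward rare failing schedules. As a rough illustration of the basic CE iteration only (sample candidates, keep the elite, refit the sampling distribution, repeat), here is a minimal sketch on a toy continuous maximization problem; the function names and the Gaussian sampling model are illustrative assumptions, not the thesis's graph-based formulation.

```julia
# Minimal sketch of the cross-entropy method for optimization,
# assuming a Gaussian sampling distribution (illustrative only).
using Statistics, Random

function cross_entropy_maximize(S; iters=50, nsamples=100, nelite=10,
                                mu=0.0, sigma=5.0)
    for _ in 1:iters
        xs = mu .+ sigma .* randn(nsamples)               # sample candidates
        elite = xs[sortperm(S.(xs), rev=true)[1:nelite]]  # keep the best scorers
        mu, sigma = mean(elite), std(elite)               # refit the distribution
        sigma < 1e-8 && break                             # distribution collapsed
    end
    return mu
end

# Example: maximize a performance function peaked at x = 3.
score(x) = -(x - 3)^2
println(cross_entropy_maximize(score))   # ≈ 3.0
```
-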
Accurate and Efficient Sums and Dot Products in Julia
Accurate and Efficient Sums and Dot Products in Julia. Chris Elrod (Eli Lilly, [email protected]) and François Févotte (TriScale innov, Palaiseau, France, [email protected]). Preprint, 2019. HAL Id: hal-02265534v2, https://hal.archives-ouvertes.fr/hal-02265534v2; submitted 14 Aug 2019 (v2), last revised 21 Aug 2019 (v3).
Abstract: This paper presents an efficient, vectorized implementation of various summation and dot product algorithms in the Julia programming language. These implementations are available under an open source license in the AccurateArithmetic.jl Julia package. Besides naive algorithms, compensated algorithms are implemented: the Kahan-Babuška-Neumaier summation algorithm, and the Ogita-Rump-Oishi simply compensated summation and dot product algorithms. [...] For programs that already use double precision (64-bit FP numbers), increasing the precision is not an option, since quadruple precision (128-bit FP numbers) remains too expensive for most purposes. On the other hand, hitting the memory wall means that many algorithms (in particular BLAS1-like algorithms such as summation and dot product) are limited by the memory bandwidth in a wide range of vector sizes.
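For a flavor of what the package's compensated algorithms do, here is a minimal scalar sketch of Kahan-Babuška-Neumaier summation; `sum_kbn_sketch` is a hypothetical name, and the actual AccurateArithmetic.jl implementations are vectorized and substantially more involved.

```julia
# Scalar sketch of Kahan-Babuška-Neumaier compensated summation:
# a second accumulator captures the low-order bits lost by each add.
function sum_kbn_sketch(x)
    s = zero(eltype(x))   # running sum
    c = zero(eltype(x))   # running compensation
    for xi in x
        t = s + xi
        if abs(s) >= abs(xi)
            c += (s - t) + xi   # low-order bits of xi were lost
        else
            c += (xi - t) + s   # low-order bits of s were lost
        end
        s = t
    end
    return s + c
end

# The naive left-to-right sum of this vector returns 0.0;
# the compensated sum recovers the exact result 2.0.
println(sum_kbn_sketch([1.0, 1e100, 1.0, -1e100]))  # 2.0
```
-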
Error Analysis and Improving the Accuracy of Winograd Convolution for Deep Neural Networks
Error Analysis and Improving the Accuracy of Winograd Convolution for Deep Neural Networks. Barbara Barabasz (School of Computer Science and Statistics), Andrew Anderson (School of Computer Science and Statistics), Kirk M. Soodhalter (School of Mathematics), and David Gregg (School of Computer Science and Statistics), Trinity College Dublin, Dublin 2, Ireland.
Abstract: Popular deep neural networks (DNNs) spend the majority of their execution time computing convolutions. The Winograd family of algorithms can greatly reduce the number of arithmetic operations required and is present in many DNN software frameworks; however, the performance gain comes at the cost of reduced floating point (FP) accuracy. In this paper, we analyse the worst-case FP error and prove estimates of the norm and conditioning of the algorithm. We show that the error bound grows exponentially with the size of the convolution, but that the bound for the modified algorithm is smaller than for the original. We propose several methods for reducing FP error: a canonical evaluation ordering based on Huffman coding that reduces summation error; an experimental study of the selection of sampling "points", identifying empirically good points for the most important sizes and the main factors associated with good points; and further techniques including mixed-precision convolution and pairwise summation across DNN channels. Using our methods we can significantly reduce FP error for a given block size, which allows larger block sizes and reduced computation.
1. Motivation. Deep Neural Networks (DNNs) have become powerful tools for image, video, speech and language processing. However, DNNs are very computationally demanding, both during and after training.
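Pairwise summation, one of the error-reduction techniques the abstract lists, is easy to sketch: recursively splitting the sum keeps worst-case error growth proportional to log n rather than n. This generic version (with an assumed small-run cutoff) is an illustration, not the paper's channel-wise variant.

```julia
# Pairwise (cascade) summation: split the range in half, sum each
# half recursively, and fall back to a naive sum on short runs.
function pairwise_sum(x, lo=1, hi=length(x); base=16)
    hi - lo < base && return sum(@view x[lo:hi])  # short run: naive sum
    mid = (lo + hi) >>> 1
    return pairwise_sum(x, lo, mid; base=base) +
           pairwise_sum(x, mid + 1, hi; base=base)
end

# 0.1 is not exactly representable, so naive accumulation drifts;
# the pairwise result stays closer to the exact value 1e5.
println(pairwise_sum(fill(0.1, 10^6)))
```
-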
Accuracy, Cost and Performance Trade-Offs for Streaming Set-Wise Floating Point Accumulation on FPGAs
University of South Carolina Scholar Commons, Theses and Dissertations, 2013. Recommended citation: Nagar, K. K. (2013). Accuracy, Cost and Performance Trade-Offs for Streaming Set-Wise Floating Point Accumulation on FPGAs (doctoral dissertation). Retrieved from https://scholarcommons.sc.edu/etd/3585.
Dissertation by Krishna Kumar Nagar (Bachelor of Engineering, Rajiv Gandhi Proudyogiki Vishwavidyalaya, 2007; Master of Engineering, University of South Carolina, 2009), submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Engineering, College of Engineering and Computing, University of South Carolina, 2013. Accepted by: Jason D. Bakos, Major Professor; Duncan Buell, Manton M. Matthews, and Max Alekseyev, Committee Members; Peter G. Binev, External Examiner; Lacy Ford, Vice Provost and Dean of Graduate Studies. © 2013 Krishna Kumar Nagar. All Rights Reserved.
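As a software analogy for the problem the title describes (reducing many independent sets of floating point values that arrive as one interleaved stream), here is a hedged sketch under assumed (set_id, value) input; the function name and stream format are illustrative. On an FPGA the difficulty is doing this with a deeply pipelined FP adder, which a dictionary of running sums glosses over entirely.

```julia
# Set-wise accumulation of an interleaved stream: each incoming
# (set_id, value) pair is added to that set's running sum.
function setwise_accumulate(stream)
    sums = Dict{Int,Float64}()
    for (id, v) in stream
        sums[id] = get(sums, id, 0.0) + v
    end
    return sums
end

stream = [(1, 0.5), (2, 1.25), (1, 0.25), (2, -0.75), (1, 1.0)]
println(setwise_accumulate(stream))  # Dict(1 => 1.75, 2 => 0.5)
```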