High Performance Sparse Multivariate Polynomials: Fundamental Data Structures and Algorithms
Western University, Electronic Thesis and Dissertation Repository
August 24, 2018

Alex Brandt, The University of Western Ontario

Supervisor: Moreno Maza, Marc, The University of Western Ontario
Graduate Program in Computer Science

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science

© Alex Brandt 2018

Abstract

Polynomials may be represented sparsely in an effort to conserve memory usage and provide a succinct and natural representation. Moreover, polynomials which are themselves sparse, having very few non-zero terms, will waste memory and computation time if represented, and operated on, densely. This waste is exacerbated as the number of variables increases. We provide practical implementations of sparse multivariate data structures focused on data locality and cache complexity.

We look to develop high-performance algorithms and implementations of fundamental polynomial operations over these sparse data structures, such as arithmetic (addition, subtraction, multiplication, and division) and interpolation. We revisit a sparse arithmetic scheme introduced by Johnson in 1974, adapting and optimizing these algorithms for modern computer architectures, with our implementations over the integers and rational numbers vastly outperforming the current widespread implementations. We develop a new algorithm for sparse pseudo-division based on the sparse polynomial division algorithm, with very encouraging results. Polynomial interpolation is explored through univariate, dense multivariate, and sparse multivariate methods. Arithmetic and interpolation together form a solid high-performance foundation from which many higher-level and more interesting algorithms can be built.

Keywords: Sparse Polynomials, Polynomial Arithmetic, Polynomial Interpolation, Data Structures, High-Performance, Polynomial Multiplication, Polynomial Division, Pseudo-Division

Contents

Abstract
List of Tables
List of Figures
List of Algorithms

1 Introduction
  1.1 Existing Computer Algebra Systems and Software
  1.2 Reinventing the Sparse Polynomial Wheel?
  1.3 Contributions

2 Background
  2.1 Memory, Cache, and Locality
    2.1.1 Cache Performance and Cache Complexity
  2.2 Algebra
    2.2.1 The Many Flavours of Rings
    2.2.2 Polynomials: Rings, Definitions, and Notations
    2.2.3 Arithmetic in a Polynomial Ring
      Pseudo-division
    2.2.4 Gröbner Bases, Ideals, and Reduction
    2.2.5 Algebraic Geometry
    2.2.6 (Numerical) Linear Algebra
  2.3 Representing Polynomials
  2.4 Working with Sparse Polynomials
  2.5 Interpolation & Curve Fitting
    2.5.1 Lagrange Interpolation
    2.5.2 Newton Interpolation
    2.5.3 Curve Fitting and Linear Least Squares
  2.6 Symbols and Notation

3 Memory-Conscious Polynomial Representations
  3.1 Coefficients, Monomials, and Exponent Packing
  3.2 Linked Lists
  3.3 Alternating Arrays
  3.4 Recursive Arrays

4 Polynomial Arithmetic
  4.1 In-place Addition and Subtraction
  4.2 Multiplication
    4.2.1 Implementation
      Heap Optimizations
    4.2.2 Experimentation
  4.3 Division
    4.3.1 Implementation
    4.3.2 Experimentation
  4.4 Pseudo-Division
    4.4.1 Implementation
    4.4.2 Experimentation
  4.5 Normal Form and Multi-Divisor Pseudo-Division

5 Symbolic and Numeric Polynomial Interpolation
  5.1 Univariate Polynomial Interpolation
  5.2 Dense Multivariate Interpolation
    5.2.1 Implementing Early Termination for Multivariate Interpolation
    5.2.2 The Difficulties of Multivariate Interpolation
    5.2.3 Rational Function Interpolation
  5.3 Sparse Multivariate Interpolation
    5.3.1 Probabilistic Method
    5.3.2 Deterministic Method
    5.3.3 Experimentation
  5.4 Numerical Interpolation (& Curve Fitting)

6 Conclusions and Future Work

Bibliography
Curriculum Vitae

List of Tables

4.1 Comparing heap implementations with and without chaining
4.2 Comparing pseudo-division on triangular decompositions

List of Figures

2.1 Array representation of a dense univariate polynomial
2.2 Array representation of a dense recursive multivariate polynomial
3.1 A 3-variable exponent vector packed into a single machine word
3.2 A polynomial encoded as a linked list
3.3 A polynomial encoded as an alternating array
3.4 Comparing linked list and alternating array implementations of polynomial addition
3.5 Comparing cache misses in polynomial addition
3.6 Array representation of a dense recursive multivariate polynomial
3.7 An alternating array and recursive array polynomial representation
4.1 Alternating array representation showing GMP trees
4.2 Comparing in-place and out-of-place polynomial addition
4.3 An example heap of integers
4.4 A chained heap of product terms
4.5 Comparing multiplication of integer polynomials
4.6 Comparing multiplication of rational number polynomials
4.7 Comparing cache misses in polynomial multiplication
4.8 Comparing division of integer polynomials
4.9 Comparing division of rational number polynomials
4.10 A recursive polynomial representation for a pseudo-quotient
4.11 Comparing pseudo-division of integer polynomials
4.12 Comparing pseudo-division of rational number polynomials
4.13 Comparing naive and heap-based pseudo-division
5.1 Comparing univariate interpolation implementations
5.2 A collection of points satisfying condition GC
5.3 Comparing probabilistic and deterministic sparse interpolation algorithms

List of Algorithms

2.1 polynomialDivision
2.2 naïvePseudoDivision
2.3 addPolynomials
2.4 multiplyPolynomials
4.1 heapMultiplyPolynomials
4.2 dividePolynomials
4.3 heapDividePolynomials
4.4 pseudoDividePolynomials
4.5 heapPseudoDividePolynomials
5.1 LagrangeInterpolation
5.2 multiplyByBinomial InPlace
5.3 denseInterpolation Driver
5.4 sparseInterpolation StageI
5.5 sparseInterpolation
5.6 sparseInterpolation StageI Deterministic
5.7 sparseInterpolation Deterministic

Chapter 1

Introduction

In the world of computer algebra and scientific computing, high performance is paramount. Due to the exact nature of symbolic computations (in fact, this exactness is one of the defining characteristics of computer algebra), it is possible to be far more expressive than with numerical methods, leading to more complicated mathematical formulas and theories. But as these formulas, problems, and data sets grow large, the intermediate expressions which arise during the process of solving these problems grow even larger. This expression swell is one reason why computer algebra has historically been seen as slow and impractical. In contrast to numerical analysis, where floating point numbers occupy a fixed amount of space regardless of their value, symbolic computation uses arbitrary-precision arithmetic, allowing numbers to grow and grow. Hence, careful implementation is needed to avoid such drastic slowdowns and make symbolic computation practical and useful in solving mathematical and scientific problems.

One such problem we are concerned with is polynomial system solving. Of course, this is an important problem with applications in every scientific discipline. Algorithms for solving these systems typically rely on some core operation, applied after algebraic techniques attempt to reduce the system's complexity. This core operation could be based on Gröbner bases or triangular decompositions. We are motivated by the efforts to obtain an efficient and open-source implementation of triangular decompositions using the theory of regular chains [4]. Such algorithms [18] have already been integrated into the computer algebra system Maple as part of the RegularChains library [53]. However, we believe that we can do better.

The Basic Polynomial Algebra Subprograms (BPAS) library [2] is an open-source library for high-performance polynomial operations, such as arithmetic, real root isolation, and polynomial system solving. It is mainly written in C for performance, with a C++ wrapper interface for portability, object-oriented programming, and end-user usability. Moreover, it makes use of the Cilk extension [52] for parallelization and improved performance on multi-core processors. It is within