LCM: Lectures on Computational Mathematics

Przemysław Koprowski

September 10, 2015 – April 25, 2021

These notes were compiled on April 25, 2021. Since I update the notes quite frequently, it is possible that a newer version is already available. Please check on: http://www.pkoprowski.eu/lcm

Contents

1 Fundamental algorithms 9
  1.1 Integers and univariate polynomials 9
  1.2 Floating point numbers 33
  1.3 Euclidean algorithm 38
  1.4 Rational numbers and rational functions 57
  1.5 Multivariate polynomials 58
  1.6 Transcendental constants 58
  1.A Scaled Bernstein polynomials 58
2 Modular arithmetic 79
  2.1 Basic modular arithmetic 79
  2.2 Chinese remainder theorem 82
  2.3 Modular exponentiation 87
  2.4 Modular square roots 92
  2.5 Applications of modular arithmetic 109
3 Matrices 115
  3.1 Matrix arithmetic 115
  3.2 Gauss elimination 118
  3.3 Matrix decomposition 126
  3.4 Determinant 126
  3.5 Systems of linear equations 147
  3.6 Eigenvalues and eigenvectors 147
  3.7 Short vectors 155
  3.8 Special matrices 168
4 Reconstruction and interpolation 175
  4.1 Polynomial interpolation 175
  4.2 Runge's phenomenon and Chebyshev nodes 187
  4.3 Piecewise polynomial interpolation 193
  4.4 Fourier transform and fast multiplication 194
5 Factorization and irreducibility tests 205
  5.1 Primality tests 206
  5.2 Integer factorization 215
  5.3 Square-free decomposition of polynomials 216
  5.4 Applications of square-free decomposition 227
  5.5 Polynomial factorization over finite fields 232
  5.6 Polynomial factorization over rationals 237
6 Univariate polynomial equations 239
  6.1 Generalities 239
  6.2 Explicit formulæ for low degree polynomials 242
  6.3 Bounds on roots 246
  6.4 Distances between roots 264
  6.5 Root counting and isolating 268
  6.6 Root approximation 286
  6.7 Higher degree solvable polynomials 305
  6.8 Strategies for solving univariate polynomial equations 305
  6.9 Thom's lemma 305
7 Integration 309
  7.1 Symbolic integration of rational functions 309
  7.2 Classical numeric integration schemes 312
  7.3 Monte Carlo integration 312
8 Classical elimination theory 313
  8.1 Labatie–Kalkbrener triangulation 315
  8.2 Resultant 327
  8.3 Applications of resultants 358
  8.4 Subresultants 375
9 Gröbner bases 381
  9.1 Algebraic prerequisites 381
  9.2 Monomial orders 383
  9.3 Gauss elimination revisited 385
  9.4 Multivariate polynomial reduction 386
  9.5 Gröbner bases 389
  9.6 Buchberger's criterion 390
  9.7 Minimal and reduced Gröbner bases 392
  9.8 Elimination of variables 394
10 Zero-dimensional polynomial systems 399
A Fundamental concepts from algebra 401
Bibliography 403

Introduction

Witchcraft to the ignorant, simple science to the learned.
"The Sorcerer of Rhiannon", L. Brackett

Computational mathematics is the least common multiple of mathematics and algorithms. This is a 'work-in-progress' version of lecture notes to the "Introduction to computational mathematics" and "Computational mathematics" courses.

Terry Pratchett in his novel "Interesting Times" says that "Magic isn't like maths". Assuming symmetry of the relation, this implies that mathematics is not like magic, either. Even if they both begin with 'ma'. They both provide their adepts with tools to perform certain actions: spells in the case of magic and theorems in mathematics. The difference is that in mathematics one knows why and how a given tool works. One may read a proof of a theorem and verify its correctness. A mathematician can check how the assumptions of the theorem are used and what would happen if they were omitted.
On the contrary, in magic (as far as I understand) a wizard casts a spell to obtain the desired result, but he does not know why and how the spell works, which ingredients of the spell are important, and what would happen if he said 'dabraabraca' instead of 'abracadabra'. Nowadays, there are many mathematically oriented computer programs (so-called Computer Algebra Systems, or CASs for short). Some of them are paid. Some are free. Unfortunately, many users treat them like oracles. Without understanding the principles of how these systems work, using them is more magic than mathematics. Issuing a command like

(x^6 - 5*x^4 + 4*x^2 - x).roots()

is not very different from casting a spell, when one doesn't understand what a system can or cannot do with an unsolvable polynomial (and what it really means that a polynomial is unsolvable). Thus, the aim of these notes is to explain the mathematics that operates behind the scenes in computer algebra systems. To show that there is really no magic in them.

1 Fundamental algorithms

A journey of a thousand miles begins with a single step.
"Tao Te Ching", Laozi

This chapter is devoted to introducing representations of the most elementary mathematical objects, together with the accompanying fundamental algorithms, like standard arithmetic operations. These fundamentals often seem to be neglected, but they are indispensable for developing "more serious" algorithms, as they form the basic building blocks of the latter. Moreover, these basic arithmetic operations, although shyly hidden inside the big algorithms, are executed over and over again. Thus, their speed has a major impact on the overall performance of their bigger brothers. For the foundations of mathematics the most fundamental objects are sets. In the realm of computational mathematics the elementary objects are integers (for symbolic computations) and floating-point numbers (for numerical methods).
Hence, these two types of objects will be our main subject in the next sections. Moreover, since univariate polynomials are very much like integers (and algorithms designed for polynomials are usually a little bit simpler), we discuss integers and polynomials together.

1.1 Integers and univariate polynomials

Integers and polynomials exhibit lots of similarities from the algebraic and algorithmic points of view. On the algebraic side, the ring of integers and the ring of univariate polynomials (over a field) are both Euclidean domains. From the algorithmic point of view, integers and polynomials are just finite sequences of digits and coefficients, respectively. An integer n ∈ Z is represented by a signed, finite sequence of digits ±(n_k n_{k−1} … n_0)_β such that

n = ±(n_k β^k + ⋯ + n_1 β + n_0),

where the integer β ≥ 2 is a fixed base of the representation and every digit n_i is an element of the set {0, …, β − 1}. For example, let n be the number one hundred ninety-seven thousand six hundred forty-one; then at base ten, the number n is represented by the 6-tuple (197641)_10, since

n = 1·10^5 + 9·10^4 + 7·10^3 + 6·10^2 + 4·10 + 1.

On the other hand, the same number is represented as (110000010000001001)_2 at base 2 and as (3, 4, 9)_256 at base 256. The choice of the base is quite arbitrary. As human beings we are used to base 10, but the only reason for it is that we have a total of ten fingers on both hands. For computers a more natural choice is a power of 2. The base 2 is used mostly at the lowest, hardware level (at this level there are also some other representations in use, like for instance two's-complement notation, which we will not discuss in these notes). At higher levels of abstraction, say in computer algebra systems, the base β is commonly chosen in such a way that a single digit fits into a single memory slot. So for example, at base 256 the digits are stored as bytes, at base 65 536 as 16-bit words, and so on.
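The conversion between a number and its sequence of digits at an arbitrary base β ≥ 2 can be sketched as follows. This is only an illustration (the function names `to_digits` and `from_digits` are my own, not taken from the notes); it uses repeated division with remainder by β in one direction and Horner-style accumulation in the other.

```python
def to_digits(n, beta):
    """Digits of n >= 0 at base beta >= 2, least significant digit first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, d = divmod(n, beta)   # d = current lowest digit, n = the rest
        digits.append(d)
    return digits

def from_digits(digits, beta):
    """Reconstruct n = sum of d_i * beta^i from digits (least significant first)."""
    n = 0
    for d in reversed(digits):   # Horner scheme: n = (...(d_k*beta + d_{k-1})*beta + ...)
        n = n * beta + d
    return n
```

For the number from the example above, `to_digits(197641, 256)` yields `[9, 4, 3]`, i.e. the digits 3, 4, 9 read from the most significant end, matching (3, 4, 9)_256.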
Here we will not place any restrictions on the base, nor on the length of an integer, understood as the number of its digits. We will always assume that we are able to store as many digits as we need.

Like integers, univariate polynomials are represented as finite sequences. Let R be a (fixed) ring of coefficients. Assume that we have means to represent elements of R and to operate on them. The simplest and most common approach is to represent a polynomial f ∈ R[x] as the sequence of its coefficients (f_n, …, f_1, f_0), where f = f_n x^n + ⋯ + f_1 x + f_0. This is called the power basis representation of f, since f_0, …, f_n are nothing else but the coordinates of f with respect to the basis {1, x, …, x^k, …} of the free R-module R[x]. This is not the only possible representation of a polynomial and later we will discuss the scaled Bernstein-basis representation (see Appendix 1.A).

Let f = f_n x^n + ⋯ + f_1 x + f_0 be a nonzero polynomial. The maximal index k such that f_k ≠ 0 is called the degree of f and denoted deg f. The degree of the zero polynomial is defined to be −∞. It will often be convenient to allow the coefficients of f to be defined for all integer indices, with the convention that f_j ≡ 0 whenever j < 0 or j > deg f. Formally speaking, we view polynomials as infinite sequences indexed by elements of Z with only finitely many nonzero entries. The set supp f := { k ∈ Z | f_k ≠ 0 } of the indices of nonzero terms is called the support of f. In this book we will sometimes (see e.g. Section 6.3 of Chapter 6) encounter the notion of a reciprocal polynomial, which we now introduce.

Definition 1.1.1. Let f = f_n x^n + ⋯ + f_1 x + f_0 be a polynomial. The polynomial f_0 x^n + f_1 x^{n−1} + ⋯ + f_{n−1} x + f_n, obtained from f by reversing the order of coefficients, is called the reciprocal polynomial of f and denoted rev f.