The Method of Gauss-Newton to Compute Power Series Solutions of Polynomial Homotopies*

Nathan Bliss and Jan Verschelde
University of Illinois at Chicago
Department of Mathematics, Statistics, and Computer Science
851 S. Morgan Street (m/c 249), Chicago, IL 60607-7045, USA
{nbliss2,janv}[email protected]

Abstract

We consider the extension of the method of Gauss-Newton from complex floating-point arithmetic to the field of truncated power series with complex floating-point coefficients. With linearization we formulate a linear system where the coefficient matrix is a series with matrix coefficients, and provide a characterization for when the matrix series is regular based on the algebraic variety of an augmented system. The structure of the linear system leads to a block triangular system. In the regular case, solving the linear system is equivalent to solving a Hermite interpolation problem. In general, we solve a Hermite-Laurent interpolation problem, via a lower triangular echelon form on the coefficient matrix. We show that this solution has cost cubic in the problem size. With a few illustrative examples, we demonstrate the application to polynomial homotopy continuation.

Key words and phrases. Linearization, Gauss-Newton, Hermite interpolation, polynomial homotopy, power series.

1 Introduction

1.1 Preliminaries

A polynomial homotopy is a family of polynomial systems which depend on one parameter. Numerical continuation methods to track solution paths defined by a homotopy are classical; see e.g. [3] and [23]. Our goal is to improve the algorithms to track solution paths in two ways:

*This material is based upon work supported by the National Science Foundation under Grant No. 1440534. Date: 30 June 2017.

1. Polynomial homotopies define deformations of polynomial systems starting at generic instances and moving to specific instances.
Tracking solution paths that start at singular solutions is not supported by current polynomial homotopy software systems.

2. To predict the next solution along a path, the current path trackers apply extrapolation methods to each coordinate of the solution separately, without taking the interdependencies between the variables into account.

Problem statement. We want to define an efficient, numerically stable, and robust algorithm to compute a power series expansion for a solution curve of a polynomial system. The input is a list of polynomials in several variables and a point on a solution curve at which every polynomial in the list vanishes. The output of the algorithm is a tuple of series in a parameter t, where t equals one of the variables in the given list of polynomials.

Background and related work. As pointed out in [7], polynomials, power series, and Toeplitz matrices are closely related. A direct method to solve block banded Toeplitz systems is presented in [10]. The book [6] is a general reference for methods related to approximations and power series. We found inspiration for the relationship between higher-order Newton-Raphson iterations and Hermite interpolation in [20]. The computation of power series is a classical topic in computer algebra [14]. The authors of [4] propose new algorithms to manipulate polynomials by values via Lagrange interpolation.

The Puiseux series field is one of the building blocks of tropical algebraic geometry [22]. In finding the right exponents of the leading powers of the Puiseux series, we rely on tropical methods [9], and in particular on the constructive proof of the fundamental theorem of tropical algebraic geometry [16]; see also [19] and [24].

Our contributions. Via linearization, rewriting matrices of series into series with matrix coefficients, we formulate the problem of computing the updates in Newton's method as a block structured linear algebra problem.
For matrix series where the leading coefficient is regular, the solution of the block linear system satisfies the Hermite interpolation problem. For general matrix series, where several of the leading matrix coefficients may be rank deficient, Hermite-Laurent interpolation applies. We distinguish the cases using the algebraic variety of an augmented system. To solve the block diagonal linear system, we propose to reduce the coefficient matrix to a lower triangular echelon form, and we provide a brief analysis of its cost. The source code for the algorithm presented in this paper is archived at github via our accounts nbliss and janverschelde.

Acknowledgments. We thank the organizers of the ILAS 2016 minisymposium on Multivariate Polynomial Computations and Polynomial Systems, Bernard Mourrain, Vanni Noferini, and Marc Van Barel, for giving the second author the opportunity to present this work. In addition, we are grateful to the anonymous referee who supplied many helpful remarks.

1.2 Motivating Example: Padé Approximant

One motivation for finding a series solution is that once it is obtained, one can directly compute the associated Padé approximant, which often has much better convergence properties. Padé approximants [6] are applied in symbolic deformation algorithms [18]. In this section we reproduce [6, Figure 1.1.1] in the context of polynomial homotopy continuation. Consider the homotopy

   (1 - t)(x^2 - 1) + t(3x^2 - 3/2) = 0.    (1)

The function x(t) = ((1 + t/2)/(1 + 2t))^(1/2) is a solution of this homotopy. Its second order Taylor series at t = 0 is s(t) = 1 - 3t/4 + 39t^2/32 + O(t^3). The Padé approximant of degree one in numerator and denominator is q(t) = (1 + 7t/8)/(1 + 13t/8). In Figure 1 we see that the series approximates the function only in a small interval and then diverges, whereas the Padé approximant is more accurate.

Figure 1: Comparing a Padé approximant to a series approximation.
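The numbers in this example can be reproduced with a few lines of sympy; the following is our own verification sketch, not part of the authors' software. It expands the solution x(t) in a Taylor series and recovers the [1/1] Padé coefficients by matching the series through t^2.

```python
import sympy as sp

t = sp.symbols('t')

# The solution x(t) = ((1 + t/2)/(1 + 2t))^(1/2) of the homotopy (1).
x = sp.sqrt((1 + t / 2) / (1 + 2 * t))

# Second-order Taylor series at t = 0: 1 - 3t/4 + 39t^2/32 + O(t^3).
s = sp.series(x, t, 0, 3).removeO()
c1, c2 = s.coeff(t, 1), s.coeff(t, 2)

# [1/1] Pade approximant q(t) = (1 + a*t)/(1 + b*t): matching the
# series through t^2 gives b = -c2/c1 and a = b + c1.
b = -c2 / c1
a = b + c1
print(a, b)  # 7/8 13/8
```

Matching numerator and denominator degrees against the series in this way is the classical construction; for higher degrees one solves the analogous linear system for the denominator coefficients.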
1.3 Motivating Example: Viviani's Curve

Figure 2: Viviani's Curve

Viviani's curve is defined as the intersection of the sphere x1^2 + x2^2 + x3^2 - 4 = 0 and the cylinder (x1 - 1)^2 + x2^2 - 1 = 0, shown in Figure 2. Our methods will allow us to find the Taylor series expansion around any non-isolated point of a 1-dimensional variety, assuming we have suitable starting information. For example, if we expand around the point p = (0, 0, 2) of Viviani's curve, we obtain the following series solution for x1, x2, x3:

   x1(t) = 2t^2
   x2(t) = 2t - t^3 - (1/4)t^5 - (1/8)t^7 - (5/64)t^9 - (7/128)t^11 - (21/512)t^13 - (33/1024)t^15    (2)
   x3(t) = 2 - t^2 - (1/4)t^4 - (1/8)t^6 - (5/64)t^8 - (7/128)t^10 - (21/512)t^12 - (33/1024)t^14 - (429/16384)t^16

This solution is plotted in Figure 3 for a varying number of terms. To check the correctness, we can substitute (2) into the original equations, obtaining (1573/8192)t^18 + O(t^20) and (429/4096)t^18 + O(t^20), respectively. The vanishing of the lower-order terms confirms that we have indeed found an approximate series solution.

Figure 3: Viviani's curve, with improving series approximations.

2 The Problem and Our Solution

2.1 Problem Setup

Let f = (f1, f2, ..., fm) ⊆ C[t, x1, ..., xn] such that the solution variety V(f) is 1-dimensional. Define ~f = (~f1, ~f2, ..., ~fm) to be the image of f under the natural embedding of C[t, x1, ..., xn] into C((t))[x1, ..., xn]. We wish to solve the system ~f, or in other words, compute truncated series solutions of f with t seen as the series parameter. One could of course use any variable; t is merely chosen for simplicity and without loss of generality. If there is a point p = (p0, ..., pn) ∈ V(f) with p0 = 0, our series solution will correspond to the Taylor expansion of the solution curve around that point. If no such p exists, our solution will be a Laurent series expansion around a point at infinity. From an algebraic perspective, the more natural place to solve ~f is over the field of Puiseux series.
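As an aside, the substitution check of the Viviani series (2) from Section 1.3 can also be reproduced mechanically. The sketch below, an independent sympy verification of ours rather than the authors' code, builds the truncated series (as truncations of 2t·sqrt(1 - t^2) and 2·sqrt(1 - t^2)), substitutes them into the sphere and cylinder equations, and reads off the leading residual terms.

```python
import sympy as sp

t = sp.symbols('t')

# Truncated series solution (2) around p = (0, 0, 2):
# x1 is exact, x2 is truncated at t^15, x3 at t^16.
root = sp.sqrt(1 - t**2)
x1 = 2 * t**2
x2 = sp.series(2 * t * root, t, 0, 16).removeO()
x3 = sp.series(2 * root, t, 0, 17).removeO()

# Residuals of the sphere and cylinder equations.
sphere = sp.expand(x1**2 + x2**2 + x3**2 - 4)
cylinder = sp.expand((x1 - 1)**2 + x2**2 - 1)

# All terms below t^18 cancel; the leading residual terms are
# (1573/8192) t^18 and (429/4096) t^18, as reported in the text.
print(sphere.coeff(t, 18))    # 1573/8192
print(cylinder.coeff(t, 18))  # 429/4096
```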
This field is informally defined as that of fractional power series whose exponents have bounded denominators, and it is algebraically closed by the Newton-Puiseux theorem. In our computations we will be content, however, to work over the ring of formal power series C[[t]] or the field of formal Laurent series C((t)). This is because a suitable substitution t → t^k in the original equations, where k is the ramification index of the curve, removes any exponent denominators from all intermediate computations as well as from the resulting solution.

The next necessary ingredient is the Jacobian matrix. Let Jf be the Jacobian of f with respect to t, x1, ..., xn and let J~f be the Jacobian of ~f with respect to x1, ..., xn. In other words, J(·) as a function returns the Jacobian with respect to the variables in the polynomial ring of its input. The distinction may seem trivial, but it is important to distinguish carefully between the two, as some arguments will require Jf, while J~f is the Jacobian used in the Newton iteration.

This brings us to our approach, which is to use Newton iteration on the system ~f. Namely, given some starting z ∈ C((t))^n, we will solve

   J~f(z) Δz = -~f(z)    (3)

for the update Δz to z. This is a system of equations that is linear over C((t)), so the problem is well-posed. From a computational perspective, one could simply overload the operators on (truncated) power series and apply basic linear algebra techniques. Such an approach, however, is computationally expensive; the main point of our paper is that it can be avoided, which we will demonstrate in Section 2.2.

The last item of setup is the choice of the initial z ∈ C((t))^n to begin the Newton iteration.
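To make the naive "overload the operators" reading of iteration (3) concrete, here is a minimal sketch of our own, not the paper's algorithm, in sympy. The example system f = x^2 - (1 + t) with starting value z = 1 is chosen by us; it has one equation and one unknown, so J~f(z) is the scalar 2z and each step solves (3) by a single series division, truncated at a fixed order. The number of correct coefficients doubles with each iteration.

```python
import sympy as sp

t, x = sp.symbols('t x')
ORDER = 8  # work with power series truncated at O(t^8)

# Example system over C((t)): one equation, one unknown.
f = x**2 - (1 + t)
J = sp.diff(f, x)  # the Jacobian J~f, here the scalar 2x

def truncate(expr):
    """Expand expr as a power series in t, truncated at O(t^ORDER)."""
    return sp.series(expr, t, 0, ORDER).removeO()

# Start at z = 1, which solves f at t = 0.
z = sp.Integer(1)
for _ in range(3):
    # One Newton step: solve J(z) * dz = -f(z) over truncated series.
    dz = truncate(-f.subs(x, z) / J.subs(x, z))
    z = truncate(z + dz)

# After 3 steps z agrees with sqrt(1 + t) through t^7.
print(sp.expand(z - truncate(sp.sqrt(1 + t))))  # 0
```

Every arithmetic operation here goes back through a full series expansion, which is exactly the computational expense the linearization of Section 2.2 is designed to avoid.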