MARS: A Maple/Matlab/C Resultant-Based Solver

Ioannis Z. Emiris
INRIA, B.P. 93
Sophia-Antipolis 06902, France
[email protected]
http://www.inria.fr/safir/emiris

Aaron Wallack
Cognex Corporation
1 Vision Drive
Natick MA 01760, USA
[email protected]

Dinesh Manocha
Department of Computer Science, UNC
Chapel Hill NC 27599-3175, USA
http://www.cs.unc.edu/~dm
Abstract

The problem of computing zeros of a system of polynomial equations has been well studied in the computational literature. A number of algorithms have been proposed, and many computer algebra and public domain packages provide the capability of computing the roots of polynomial equations. Most of these implementations are based on Gröbner bases, which can be slow for even small problems. In this paper, we present a new system, MARS, to compute the roots of a zero-dimensional polynomial system. It is based on computing the resultant of a system of polynomial equations followed by eigendecomposition of a generalized companion matrix. MARS includes a robust library of Maple functions for constructing resultant matrices, an efficient library of Matlab routines for numerically solving the eigenproblem, and C code generation routines and a C library for incorporating the numerical solver into applications. We illustrate the usage of MARS on various examples and utilize different resultant formulations.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ISSAC'98, Rostock, Germany. © 1998 ACM 1-58113-002-3/98/0008 $5.00

1 Introduction

Finding the solutions of a system of nonlinear polynomial equations over a given field is a classical and fundamental problem in the computational literature and has been extensively studied.

Recently, a great deal of interest in solving nonlinear polynomial systems has come from different applications, including computer algebra [Ren92], robotics [MC94, RR95, WC97], computer graphics [Man94], geometric and solid modeling [Hof89, MD95], computer vision [Emi97, WM98], economics and optimization [MM94], and molecular biology [EM96]. The main operations in these applications can be classified into two types. First, the simultaneous elimination of one or more variables from a given set of polynomial equations to obtain a "symbolically smaller" system. This problem arises, for instance, in graphics and modeling applications, where the implicit expression of a curve or surface is precisely the resultant polynomial. Second, the computation of all numeric solutions of a system of polynomial equations. Our practical motivation is fast computation of the solutions of a zero-dimensional system with 10 variables or less.

Elimination theory, a branch of classical algebraic geometry, investigates the conditions under which sets of polynomials have common roots. Its results were known at least a century ago and still appear in modern treatments of algebraic geometry, although often in non-constructive form. The main result is the construction of a single resultant polynomial of n homogeneous polynomial equations in n unknowns, such that the vanishing of the resultant is a necessary but not always sufficient condition for the given system to have a nontrivial solution. This resultant is known as the multipolynomial resultant of the given system of polynomial equations [AS88, Man94, MC94]. The multipolynomial resultant of the system of polynomial equations can be used for eliminating the variables and computing the numeric solutions of a given system of polynomial equations. The same approach is also valid in the non-homogeneous context, as illustrated later.

Given a zero-dimensional system, the computation of all common solutions can be reduced to an eigenvalue problem. In this resultant-eigendecomposition technique, each eigenvalue corresponds to one variable of a root, and the associated eigenvector characterizes the other variables of the root. This approach is very useful for repeatedly solving similar systems, because the symbolic processing is performed only once and numerically solving the system reduces to instantiating coefficients of the resultant matrix followed by eigendecomposition.
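The univariate case previews this reduction from root-finding to an eigenvalue problem: the roots of a monic polynomial are exactly the eigenvalues of its companion matrix. The sketch below is only an illustration and not MARS code; it uses plain power iteration in Python on the assumed toy polynomial x^2 - 4x + 3, whereas MARS delegates the eigenproblem to Matlab's numerical routines.

```python
# Roots of a monic polynomial are eigenvalues of its companion matrix.
# Toy example: p(x) = x^2 - 4x + 3 = (x - 1)(x - 3), whose companion
# matrix is C = [[0, -3], [1, 4]]. Power iteration recovers the
# dominant eigenvalue, i.e. the largest root 3.

def companion(coeffs):
    """Companion matrix of the monic polynomial
    x^n + coeffs[n-1] x^(n-1) + ... + coeffs[0]."""
    n = len(coeffs)
    C = [[0.0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1.0          # subdiagonal of ones
    for i in range(n):
        C[i][n - 1] = -coeffs[i]   # last column holds -coeffs[i]
    return C

def mat_vec(C, v):
    return [sum(C[i][j] * v[j] for j in range(len(v))) for i in range(len(C))]

def dominant_eigenvalue(C, iters=60):
    """Power iteration: converges to the eigenvalue of largest modulus."""
    v = [1.0] * len(C)
    for _ in range(iters):
        w = mat_vec(C, v)
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    w = mat_vec(C, v)
    k = max(range(len(v)), key=lambda i: abs(v[i]))
    return w[k] / v[k]           # quotient on the largest component

root = dominant_eigenvalue(companion([3.0, -4.0]))  # p(x) = x^2 - 4x + 3
print(round(root, 6))  # prints 3.0
```

A production solver would of course use a full eigendecomposition (as Matlab's eigensolvers do) to obtain all roots at once, not just the dominant one.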
Main Contributions: The main contribution of our work is a software package consisting of Maple, Matlab and C libraries for solving zero-dimensional systems; a more thorough review can be found in [WEM98]. Given a system, MARS computes the resultant as a matrix polynomial and numerically solves the resultant matrices. MARS simplifies the task of incorporating a numerical multipolynomial solver into a user's application. We present a number of issues in the design and implementation of this library and highlight its performance on a number of examples.

The rest of the paper is organized as follows. The next section discusses alternate approaches, existing implementations and their limitations. Section 3 outlines the main approach in using resultant matrices for reducing system solving to a problem in linear algebra. In particular, subsection 3.1 mentions the different matrix formulations and how they are constructed, whereas the following subsection shows the matrix operations, typically performed numerically, applied to approximate all common roots. Section 4 describes the main architecture of our library and the package's organization, and section 5 discusses implementation details and features of the MARS package. We illustrate the power and adaptability of our library, including the available interfaces and the automatic generation of C code, in section 6 by studying concrete examples. Section 7 reviews the performance and practical complexity of MARS. We summarize and conclude with further work in section 8.

2 Related work

There is a long history of using resultant-based approaches to study and solve systems of polynomial equations. Recently, certain practical results have established resultants, along with Gröbner bases and continuation techniques, as methods of choice in solving zero-dimensional polynomial systems. For systems of medium size, the applications highlighted earlier illustrate the comparative advantages of resultant-based methods: resultants can strongly exploit polynomial structure, they reduce the nonlinear problem to one in linear algebra, and they combine a symbolic with a numeric approach.

Gröbner bases have been studied for a longer time and offer an array of general implementations to efficiently handle zero-dimensional systems. For the purposes of illustration, we mention only a very few representatives, namely GB [Fau95] and the PoSSo/FRISCO library [FRI97]. Most computer algebra systems, like Axiom, Mathematica, Maple and Reduce, have a package for computing the Gröbner bases of an ideal. One of the main drawbacks of using Gröbner bases is that the method may be slow for even small problems. Motivated by the need for faster implementations, some special systems have been developed exclusively for Gröbner bases computation, including Macaulay and Cocoa.

Other numerical techniques exist, based on iterative algorithms and homotopy methods. Iterative techniques, like Newton's method, are good for local analysis and work well if we are given good initial guesses near the solutions. This is a rather difficult prerequisite for most applications. Homotopy methods have a good theoretical background and proceed by following paths in the complex space. In theory, each path converges to a geometrically isolated solution. They have been implemented and tried on a variety of applications, e.g. [MSW94, VVC94]. In practice, however, current homotopy implementations and algorithms suffer from many problems. The different paths being followed may not be geometrically isolated. As a result, each path has to be at times followed with impractically tight tolerances, which slows down the overall algorithm.

Multipolynomial resultant algorithms provide the most efficient methods, as far as asymptotic complexity is concerned, for solving a system of polynomial equations by eliminating variables. One of their main advantages is the fact that the resultant can always be expressed in terms of matrices and determinants. We describe different techniques for the construction of resultant matrices below. Systems such as Axiom, Maple, Mathematica and Reduce only offer matrix expressions for the resultant of two univariate polynomials, either as Sylvester's matrix or as Bézout's matrix. Some use of resultants can also be found in other systems, such as CASA, developed at RISC-Linz.

Different specialized modules based on resultant matrices exist for solving systems of polynomial equations, e.g. [CGT97, CP93, Emi97, KM95, KS96, MP97, Reg95]. Typically, these programs rely on Linpack, Eispack, Lapack, or Matlab for their numerical calculations. All of these programs implement one or, exceptionally, two kinds of matrices, and are not designed for wide distribution, so they lack in user-friendliness. There is currently a very interesting effort in the context of FRISCO for developing a general library of resultant functions in C++, in which the second author is participating.

3 Resultant-based system solving

There is more than one way to solve arbitrary polynomial systems by using resultants, yet here we focus on the one method presenting the strongest practical interest. Namely, we are interested in constructing resultant matrices, whose determinants express nontrivial multiples of the resultant polynomial and which, furthermore, reduce the computation of all common zeros to a problem in linear algebra. The symbolic part of matrix construction can strongly exploit polynomial structure, whereas the manipulation of the matrices benefits from the current state of the art in numerical linear algebra. Below we overview both stages and explain how to reduce the nonlinear problem to the computation of eigenvalues and eigenvectors of a square matrix.

3.1 Symbolic computation

The computation of resultants typically relies on constructing matrices whose determinant is either the exact resultant polynomial or, more generally, a nontrivial multiple of it. In addition, these matrices are sufficient for solving polynomial systems, since they reduce the given nonlinear problem to a question in linear algebra. For details see [KL92, EM97].

Resultant matrices can be classified into two large families. The first type includes Sylvester matrices and their classic generalization by Macaulay. In the context of sparse elimination theory, there are matrices that generalize Sylvester's and Macaulay's formulations and are known as sparse, or toric, resultant matrices. The second type of resultant matrices includes Bézout matrices and their generalizations.
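The Sylvester matrix, the simplest member of the first family, can be made concrete. The sketch below (Python for illustration with made-up polynomials; the MARS library itself constructs its matrices in Maple) builds the Sylvester matrix of two univariate polynomials and checks that its determinant, the resultant, vanishes exactly when the inputs share a common root.

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f and g, given as coefficient lists with the
    highest-degree coefficient first (e.g. x^2 - 1 -> [1, 0, -1])."""
    m, n = len(f) - 1, len(g) - 1      # m = deg f, n = deg g
    size = m + n
    M = [[0] * size for _ in range(size)]
    for i in range(n):                 # n shifted copies of f
        for j, c in enumerate(f):
            M[i][i + j] = c
    for i in range(m):                 # m shifted copies of g
        for j, c in enumerate(g):
            M[n + i][i + j] = c
    return M

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    d = Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)         # singular: resultant vanishes
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            d = -d                     # row swap flips the sign
        d *= A[col][col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
    return d

# f = x^2 - 1 and g = x - 2 share no root: the resultant is nonzero (= 3).
assert det(sylvester([1, 0, -1], [1, -2])) == 3
# f = x^2 - 1 and g = x - 1 share the root x = 1: the resultant vanishes.
assert det(sylvester([1, 0, -1], [1, -1])) == 0
```

Exact rational arithmetic is used here so that "vanishes" is an exact statement; the numerical stage of a solver would instead work in floating point with the tools of numerical linear algebra.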
3.1.1 Sylvester and Macaulay matrices

The Sylvester resultant is a widely known resultant formulation for systems of two univariate polynomials. In this case, the resultant equals the determinant of the Sylvester matrix. If d_i = deg f_i, for i = 1, 2, the rows of the Sylvester matrix express the polynomials f_1(x), x f_1(x), ..., x^{d_2 - 1} f_1(x) and f_2(x), x f_2(x), ..., x^{d_1 - 1} f_2(x). The matrix columns are indexed by the monomials {1, x, ..., x^{d_1 + d_2 - 1}}. Example 3.1 below illustrates this approach. This is a widespread tool for variable elimination even in the case of several variables. Then Sylvester's construction is applied repeatedly, albeit with a high overhead, because this technique introduces many superfluous solutions.

Macaulay devised a method that generalizes Sylvester's construction to systems of an arbitrary number of polynomials, under the hypothesis that these polynomials are completely dense [Mac02]. More formally, given n + 1 dense non-homogeneous polynomials in n variables, where the total degree of the i-th polynomial is denoted by d_i, i = 1, ..., n + 1, the matrix columns are indexed by all monomials in the input variables whose degree is bounded by Σ_{i=1}^{n+1} d_i − n. The matrix rows express monomial multiples of the input polynomials f_i. The entries are either zero or equal to a coefficient of some input polynomial. In the current version of MARS, Macaulay's formulation is used whenever we compute a u-resultant.

3.1.2 Sparse resultant matrices

Resultants in classical elimination theory, as well as Macaulay matrices, are completely defined by the total degrees of the input polynomials. More recently, sparse elimination theory has modeled polynomials by the sets of their nonzero monomials, or supports, in order to obtain tighter bounds and exploit sparseness. Polynomials are specified by their support and its convex hull, known as the Newton polytope. Sparse elimination defines the sparse, or toric, resultant, whose degree depends on these convex polytopes instead of the total degrees. Canny et al. described a construction based on a mixed subdivision of the Newton polytopes of the input polynomial system [CE93, CP93]. A direct incremental method yields smaller matrices [EC95, Emi97].

3.1.3 Bézout matrices

The second branch of resultant matrix constructions stems from Bézout's method for the resultant of two univariate polynomials. Let these polynomials be f_1(x), f_2(x), of degrees d_1 and d_2.

The theory behind these matrices, based on algebraic residues, shows that Bézout matrices behave better for several degenerate input systems. Moreover, Bézout's matrix has smaller size than Macaulay's and the sparse resultant matrix. On the downside, its entries are polynomials in the input coefficients. Another difference is that the matrices of Sylvester type are constructed combinatorially, whereas the Bézout matrix construction is based on discrete differentials and requires some polynomial computation.

3.2 Numerical solving

The problem addressed here is to find all the common roots of a system of n non-homogeneous polynomials in n variables. Such a system is known as a square, or well-constrained, system, and typically has only a finite number of isolated roots. Our method reduces solving a zero-dimensional system to either a regular or a generalized eigenproblem, thus transforming the nonlinear question to a problem in linear algebra. This is a classical technique that enables us to approximate all solutions; see e.g. [Man94, Emi97] and the references thereof. Several extensions to positive-dimensional systems have been explored [KM95] or are currently under investigation.

An overconstrained system is obtained by adding an extra polynomial f_{n+1}(x, u) to the given system f_1(x), ..., f_n(x), where x = (x_1, ..., x_n). We choose f_{n+1}(x, u) to be linear with random coefficients and constant term equal to the indeterminate u. Let M be the resultant matrix of the overconstrained system, built by any method discussed above. The vanishing of det M is a necessary condition for the overconstrained system to have common roots. In this case, the resultant of the overconstrained system is a function of u and is known as the u-resultant. Partition M so that the upper left square submatrix M_{11} depends on u. By the construction of M, and for arbitrary α ∈ C^n, evaluation of the row polynomials at α is expressed by vector multiplication on the right:

    [ M_{11}(u)  M_{12} ] [   ⋮   ]   [    ⋮    ]
    [                   ] [  α^q  ] = [ g(α, u) ]
    [ M_{21}     M_{22} ] [   ⋮   ]   [    ⋮    ]

where q ∈ Z^n ranges over all column monomials and g(x, u) ranges over all row polynomials. Clearly, if α is a common root of the input well-constrained system and u takes the values that make det M(u) vanish, then the vector must lie in the kernel of M and hence in the kernel of
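The elimination step underlying this reduction can be sketched on a toy bivariate system: hiding x in the coefficient field, the determinant of the Sylvester matrix in y becomes a polynomial in x that vanishes at the x-coordinates of all common roots. The system {x^2 + y^2 - 5, xy - 2} below is an assumed example for illustration, not one treated by MARS in the paper, and the Python code is a sketch rather than the package's Maple/Matlab implementation.

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def res_y(x):
    """Determinant of the Sylvester matrix, taken in y, of
    f1 = y^2 + (x^2 - 5) and f2 = x*y - 2, with x as a parameter."""
    return det3([
        [1, 0, x * x - 5],   # row for f1
        [x, -2, 0],          # row for y * f2
        [0, x, -2],          # row for f2
    ])

# Common roots of {x^2 + y^2 - 5, x*y - 2} include (1, 2) and (2, 1):
# the resultant in y must vanish at their x-coordinates.
assert res_y(1) == 0 and res_y(2) == 0
# A generic x is not the first coordinate of any common root:
assert res_y(3) != 0
```

Expanding the determinant symbolically gives x^4 - 5x^2 + 4, whose roots ±1 and ±2 are exactly the x-coordinates of the four common roots of this system; a numerical solver would recover them through an eigendecomposition rather than by factoring.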