MARS: A Maple/Matlab/C Resultant-Based Solver

Ioannis Z. Emiris                      Aaron Wallack
INRIA                                  Cognex Corporation
B.P. 93                                1 Vision Drive
Sophia-Antipolis 06902, France         Natick MA 01760, USA
[email protected]              [email protected]
http://www.inria.fr/safir/emiris

Dinesh Manocha
Department of Computer Science, UNC
Chapel Hill NC 27599-3175, USA
[email protected]
http://www.cs.unc.edu/~dm

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ISSAC'98, Rostock, Germany. (c) 1998 ACM 1-58113-002-3/98/0008 $5.00

Abstract

The problem of computing zeros of a system of polynomial equations has been well studied in the computational literature. A number of algorithms have been proposed, and many commercial and public domain packages provide the capability of computing the roots of polynomial equations. Most of these implementations are based on Gröbner bases, which can be slow for even small problems. In this paper, we present a new system, MARS, to compute the roots of a zero-dimensional polynomial system. It is based on computing the resultant of a system of polynomial equations, followed by eigendecomposition of a generalized companion matrix. MARS includes a robust library of Maple functions for constructing resultant matrices, an efficient library of Matlab routines for numerically solving the eigenproblem, and C code generation routines and a C library for incorporating the numerical solver into applications. We illustrate the usage of MARS on various examples and utilize different resultant formulations.

1 Introduction

Finding the solutions of a system of nonlinear polynomial equations over a given field is a classical and fundamental problem in the computational literature which has been extensively studied.

Recently, a great deal of interest in solving nonlinear polynomial systems has come from different applications, including computer algebra [Ren92], robotics [MC94, RR95, WC97], computer graphics [Man94], geometric and solid modeling [Hof89, MD95], computer vision [Emi97, WM98], economics and optimization [MM94], and molecular biology [EM96]. The main operations in these applications can be classified into two types. First, the simultaneous elimination of one or more variables from a given set of polynomial equations to obtain a "symbolically smaller" system. This problem arises, for instance, in graphics and modeling applications where the implicit expression of a curve or surface is precisely the resultant polynomial. Second, the computation of all numeric solutions of a system of polynomial equations. Our practical motivation is fast computation of the solutions of a zero-dimensional system with 10 variables or less.

Elimination theory, a branch of classical algebraic geometry, investigates the conditions under which sets of polynomials have common roots. Its results were known at least a century ago and still appear in modern treatments of algebraic geometry, although often in non-constructive form. The main result is the construction of a single resultant polynomial of n homogeneous polynomial equations in n unknowns, such that the vanishing of the resultant is a necessary but not always sufficient condition for the given system to have a nontrivial solution. This resultant is known as the multipolynomial resultant of the given system of polynomial equations [AS88, Man94, MC94]. The multipolynomial resultant can be used for eliminating the variables and computing the numeric solutions of a given system of polynomial equations. The same approach is also valid in the non-homogeneous context, as illustrated later.

Given a zero-dimensional system, the computation of all common solutions can be reduced to an eigenvalue problem. In this resultant-eigendecomposition technique, each eigenvalue corresponds to one variable of a root, and the associated eigenvector characterizes the other variables of the root. This approach is very useful for repeatedly solving similar systems, because the symbolic processing is only performed once, and numerically solving the system reduces to instantiating coefficients of the resultant matrix followed

by eigendecomposition.

Main Contributions: The main contribution of our work is a software package consisting of Maple, Matlab and C libraries for solving zero-dimensional systems (a more thorough review can be found in [WEM98]). Given a system, MARS computes the resultant as a matrix polynomial and numerically solves the resultant matrices. MARS simplifies the task of incorporating a numerical multipolynomial solver into a user's application. We present a number of issues in the design and implementation of this library and highlight its performance on a number of examples.

The rest of the paper is organized as follows. The next section discusses alternate approaches, existing implementations and their limitations. Section 3 outlines the main approach in using resultant matrices for reducing system solving to a problem in linear algebra. In particular, subsection 3.1 mentions the different matrix formulations and how they are constructed, whereas the following subsection shows the matrix operations, typically performed numerically, applied to approximate all common roots. Section 4 describes the main architecture of our library and the package's organization, and section 5 discusses implementation details and features of the MARS package. We illustrate the power and adaptability of our library, including the available interfaces and the automatic generation of C code, in section 6 by studying concrete examples. Section 7 reviews the performance and practical complexity of MARS. We summarize and conclude with further work in section 8.

2 Related work

There is a long history of using resultant-based approaches to study and solve systems of polynomial equations. Recently, certain practical results have established resultants, along with Gröbner bases and continuation techniques, as a method of choice in solving zero-dimensional polynomial systems. For systems of medium size, the applications highlighted earlier illustrate the comparative advantages of resultant-based methods: resultants can strongly exploit polynomial structure, they reduce the nonlinear problem to one in linear algebra, and they combine a symbolic with a numeric approach.

Gröbner bases have been studied for a longer time and offer an array of general implementations to efficiently handle zero-dimensional systems. For the purposes of illustration, we mention only very few representatives, namely GB [Fau95] and the PoSSo/FRISCO library [FRI97]. Most computer algebra systems, like Axiom, Mathematica, Maple and Reduce, have a package for computing the Gröbner bases of an ideal. One of the main drawbacks of using Gröbner bases is that the method may be slow for even small problems. Motivated by the need for faster implementations, some special systems have been developed exclusively for Gröbner bases computation, including Macaulay and CoCoA.

Other numerical techniques exist, based on iterative algorithms and homotopy methods. Iterative techniques, like Newton's method, are good for local analysis and work well if we are given good initial guesses near the solutions. This is a rather difficult prerequisite for most applications. Homotopy methods have a good theoretical background and proceed by following paths in the complex space. In theory, each path converges to a geometrically isolated solution. They have been implemented and tried on a variety of applications, e.g. [MSW94, VVC94]. In practice, however, current homotopy implementations and algorithms suffer from many problems. The different paths being followed may not be geometrically isolated. As a result, each path has to be at times followed with impractically tight tolerances, which slows down the overall algorithm.

Multipolynomial resultant algorithms provide the most efficient methods, as far as asymptotic complexity is concerned, for solving a system of polynomial equations by eliminating variables. One of their main advantages is the fact that the resultant can always be expressed in terms of matrices and determinants. We describe different techniques for the construction of resultant matrices below. Systems such as Axiom, Maple, Mathematica and Reduce only offer matrix expressions for the resultant of two univariate polynomials, either as Sylvester's matrix or as Bézout's matrix. Some use of resultants can also be found in other systems, such as CASA, developed at RISC-Linz.

Different specialized modules based on resultant matrices exist for solving systems of polynomial equations, e.g. [CGT97, CP93, Emi97, KM95, KS96, MP97, Reg95]. Typically, these programs rely on Linpack, Eispack, Lapack, or Matlab for their numerical calculations. All of these programs implement one or, exceptionally, two kinds of matrices, and are not designed for wide distribution, so they lack user-friendliness. There is currently a very interesting effort in the context of FRISCO for developing a general library of resultant functions in C++, in which the second author is participating.

3 Resultant-based system solving

There is more than one way to solve arbitrary polynomial systems by using resultants, yet here we focus on the method presenting the strongest practical interest. Namely, we are interested in constructing resultant matrices whose determinants express nontrivial multiples of the resultant polynomial and which, furthermore, reduce the computation of all common zeros to a problem in linear algebra. The symbolic part of matrix construction can strongly exploit polynomial structure, whereas the manipulation of the matrices benefits from the current state of the art in numerical linear algebra. Below we overview both stages and explain how to reduce the nonlinear problem to the computation of eigenvalues and eigenvectors of a square matrix.

3.1 Symbolic computation

The computation of resultants typically relies on constructing matrices whose determinant is either the exact resultant polynomial or, more generally, a nontrivial multiple of it. In addition, these matrices are sufficient for solving polynomial systems, since they reduce the given nonlinear problem to a question in linear algebra. For details see [KL92, EM97].

Resultant matrices can be classified into two large families. The first type includes Sylvester matrices and their classic generalization by Macaulay. In the context of sparse elimination theory, there are matrices that generalize Sylvester's and Macaulay's formulations and are known as sparse, or toric, resultant matrices. The second type of resultant matrices includes Bézout matrices and their generalizations.
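As a small concrete illustration of the first family, the sketch below (pure Python of our own, not MARS code) builds the Sylvester matrix of two univariate polynomials and evaluates the resultant as its determinant, using exact rational arithmetic:

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of two univariate polynomials.

    f, g are coefficient lists, highest degree first, e.g.
    x^2 - 1 -> [1, 0, -1].  The matrix stacks deg(g) shifted copies
    of f over deg(f) shifted copies of g; its determinant is Res(f, g).
    """
    m, n = len(f) - 1, len(g) - 1          # degrees of f and g
    size = m + n
    rows = []
    for i in range(n):                     # rows for x^(n-1-i) * f
        rows.append([0] * i + list(f) + [0] * (size - m - 1 - i))
    for i in range(m):                     # rows for x^(m-1-i) * g
        rows.append([0] * i + list(g) + [0] * (size - n - 1 - i))
    return rows

def det(M):
    """Exact determinant via fraction-based Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(A), 1, Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if A[r][c] != 0), None)
        if p is None:
            return Fraction(0)             # singular matrix
        if p != c:
            A[c], A[p] = A[p], A[c]
            sign = -sign
        d *= A[c][c]
        for r in range(c + 1, n):
            factor = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= factor * A[c][k]
    return sign * d

# Res(x^2 - 1, x - 2) = (1 - 2)(-1 - 2) = 3
print(det(sylvester([1, 0, -1], [1, -2])))
# A shared root (x = 1) forces the resultant to vanish:
print(det(sylvester([1, -3, 2], [1, -4, 3])))
```

As the second call shows, a common root makes the determinant vanish, which is exactly the necessary condition discussed above.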

3.1.1 Sylvester and Macaulay matrices

The Sylvester resultant is a widely known resultant formulation for systems of two univariate polynomials. In this case, the resultant equals the determinant of the Sylvester matrix. If d_i = deg f_i, for i = 1, 2, the rows of the Sylvester matrix express the polynomials f_1(x), x f_1(x), ..., x^{d_2 - 1} f_1(x) and f_2(x), x f_2(x), ..., x^{d_1 - 1} f_2(x). The matrix columns are indexed by the monomials {1, x, ..., x^{d_1 + d_2 - 1}}. Example 3.1 below illustrates this approach. This is a widespread tool for variable elimination even in the case of several variables. Then Sylvester's construction is applied repeatedly, albeit with a high overhead, because this technique introduces many superfluous solutions.

Macaulay devised a method that generalizes Sylvester's construction to systems of an arbitrary number of polynomials, under the hypothesis that these polynomials are completely dense [Mac02]. More formally, given n + 1 dense non-homogeneous polynomials in n variables, where the total degree of the i-th polynomial is denoted by d_i, i = 1, ..., n + 1, the matrix columns are indexed by all monomials in the input variables whose degree is bounded by \sum_{i=1}^{n+1} d_i - n. The matrix rows express monomial multiples of the input polynomials f_i. The entries are either zero or equal to a coefficient of some input polynomial. In the current version of MARS, Macaulay's formulation is used whenever we compute a u-resultant.

3.1.2 Sparse resultant matrices

Resultants in classical elimination theory, as well as Macaulay matrices, are completely defined by the total degrees of the input polynomials. More recently, sparse elimination theory has modeled polynomials by the sets of their nonzero monomials, or supports, in order to obtain tighter bounds and exploit sparseness. Polynomials are specified by their support and its convex hull, known as the Newton polytope. Sparse elimination defines the sparse, or toric, resultant, whose degree depends on these convex polytopes instead of the total degrees. Canny et al. described a construction based on a mixed subdivision of the Newton polytopes of the input polynomial system [CE93, CP93]. A direct incremental method yields smaller matrices [EC95, Emi97].

3.1.3 Bézout matrices

The second branch of resultant matrix constructions stems from Bézout's method for the resultant of two univariate polynomials. Let these polynomials be f_1(x), f_2(x), of degrees d_1 >= d_2 respectively, and let y be a new variable. Consider

    det [ f_1(x)  f_1(y) ]
        [ f_2(x)  f_2(y) ]  /  (x - y)   =   \sum_{i=0}^{d_1 - 1} B_i(x) y^i.

The B_i(x) are polynomials in x and define the rows of the resultant matrix, whereas the columns are indexed by the monomials {1, x, ..., x^{d_1 - 1}}. Hence, we define a square matrix of dimension d_1, thus smaller than Sylvester's matrix.

Bézout's matrix has been generalized to arbitrary systems. Although the above method does not always yield a square matrix, it is always possible to define a square maximal submatrix whose determinant is a nontrivial multiple of the resultant [CM96, EM97]. There is a rich algebraic theory behind these matrices, based on algebraic residues, which shows that Bézout matrices behave better for several degenerate input systems. Moreover, Bézout's matrix has smaller size than Macaulay's and the sparse resultant matrix. On the downside, its entries are polynomials in the input coefficients. Another difference is that the matrices of Sylvester type are constructed combinatorially, whereas the Bézout matrix construction is based on discrete differentials and requires some polynomial computation.

3.2 Numerical solving

The problem addressed here is to find all the common roots of a system of n non-homogeneous polynomials in n variables. Such a system is known as a square, or well-constrained, system, and typically has only a finite number of isolated roots. Our method reduces solving a zero-dimensional system to either a regular or a generalized eigenproblem, thus transforming the nonlinear question into a problem in linear algebra. This is a classical technique that enables us to approximate all solutions; see e.g. [Man94, Emi97] and the references thereof. Several extensions to positive-dimensional systems have been explored [KM95] or are currently under investigation.

An overconstrained system is obtained by adding an extra polynomial f_{n+1}(x; u) to the given system f_1(x), ..., f_n(x), where x = (x_1, ..., x_n). We choose f_{n+1}(x; u) to be linear with random coefficients and constant term equal to the indeterminate u. Let M be the resultant matrix of the overconstrained system, built by any method discussed above. The vanishing of det M is a necessary condition for the overconstrained system to have common roots. In this case, the resultant of the overconstrained system is a function of u and is known as the u-resultant. Partition M so that the upper left square submatrix M_11 depends on u. By the construction of M, and for arbitrary \alpha \in C^n, evaluation of the row polynomials at \alpha is expressed by vector multiplication on the right:

    [ M_11(u)  M_12 ]  [ ...  \alpha^q  ... ]^T  =  [ ...  g(\alpha; u)  ... ]^T,
    [ M_21     M_22 ]

where q \in Z^n ranges over all column monomials and g(x; u) ranges over all row polynomials. Clearly, if \alpha is a common root of the input well-constrained system and u takes the values that make det M(u) vanish, then the vector must lie in the kernel of M and hence in the kernel of M'(u) = M_11(u) - M_12 M_22^{-1} M_21. After suitable transformations the vector corresponds to the eigenvector of another square matrix, hence reducing root-finding to an eigenproblem. In section 5.2, we describe an algorithm that, given an eigenvector and the column monomials, computes the coordinates of the root \alpha. Currently in MARS, u-resultant matrices are always computed by Macaulay's construction.

We have reduced root finding to a problem in linear algebra by adding the u-form to the given well-constrained system. An alternative is to "hide" one of the n variables in the coefficient field. This produces an overconstrained system without increasing the problem dimension. Our experience with systems in robotics and vision suggests that this is preferable in many practical situations. Formally, we consider the given polynomials as

    f_1, ..., f_n \in Q[x_n][x_1, ..., x_{n-1}].
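To make the hidden-variable route concrete before the formal details, here is a pure-Python sketch of ours (not MARS code) on the two-polynomial system used in example 3.1 below. Hiding y, the Sylvester matrix in x has entries that are polynomials in y; instead of forming the companion eigenproblem that MARS uses, this sketch simply expands det M(y), finds its roots by Durand-Kerner iteration, and back-substitutes for x via f_2, a shortcut that is adequate at this size:

```python
# Hide y in f1 = x^2 + 6x + (3y - 4), f2 = 2x + (y^2 - 7y + 5).
# Polynomials in y are coefficient lists [c0, c1, ...] (constant first).

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(*ps):
    n = max(len(p) for p in ps)
    return [sum(p[i] for p in ps if i < len(p)) for i in range(n)]

def pneg(p):
    return [-a for a in p]

q2 = [5, -7, 1]                        # y^2 - 7y + 5
M = [[[1], [6], [-4, 3]],              # row f1;   columns x^2, x, 1
     [[2], q2,  [0]],                  # row x*f2
     [[0], [2], q2]]                   # row f2

def minor(a, b, c, d):                 # 2x2 determinant a*d - b*c
    return padd(pmul(a, d), pneg(pmul(b, c)))

# det M(y) by cofactor expansion along the first row
detM = padd(pmul(M[0][0], minor(M[1][1], M[1][2], M[2][1], M[2][2])),
            pneg(pmul(M[0][1], minor(M[1][0], M[1][2], M[2][0], M[2][2]))),
            pmul(M[0][2], minor(M[1][0], M[1][1], M[2][0], M[2][1])))
# detM is [-51, 26, 47, -14, 1], i.e. y^4 - 14y^3 + 47y^2 + 26y - 51

def roots(c, iters=300):
    """All complex roots of sum_k c[k] y^k (Durand-Kerner iteration)."""
    n = len(c) - 1
    lead = complex(c[-1])
    c = [complex(a) / lead for a in c]
    zs = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new = []
        for i, z in enumerate(zs):
            denom = 1.0
            for j, w in enumerate(zs):
                if j != i:
                    denom *= z - w
            new.append(z - sum(c[k] * z ** k for k in range(n + 1)) / denom)
        zs = new
    return zs

ys = [z.real for z in roots(detM) if abs(z.imag) < 1e-8]
# Back-substitute into f2: x = (-y^2 + 7y - 5) / 2
sols = [((-y * y + 7 * y - 5) / 2, y) for y in ys]
```

The two real solutions recovered here agree with the output of example 6.1 later in the paper.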

Variable x_n is chosen so that all roots are separated by projection on x_n, if possible. Otherwise, we have to deal with the case of multiple roots. The construction of M is as before, and any algorithm can be used. Row and column permutations do not affect the matrix properties, so we apply them to obtain a minimal M'. Gaussian elimination of all columns which are constant with respect to x_n is now possible, essentially giving M'(x_n) = M_11(x_n) - M_12(x_n) M_22^{-1} M_21(x_n). System solving is again reduced to an eigendecomposition of the square matrix M'.

Example 3.1 We illustrate hiding a variable in the case of a system of two polynomials, by the use of Sylvester's matrix hiding y:

    f_1(x) = x^2 + 6x + 3y - 4
    f_2(x) = 2x + y^2 - 7y + 5,

then

    [ 1    6            3y - 4     ]   [ x^2 ]     [ f_1(x, y)   ]
    [ 2    y^2 - 7y + 5   0        ]   [ x   ]  =  [ x f_2(x, y) ]
    [ 0    2            y^2 - 7y + 5 ] [ 1   ]     [ f_2(x, y)   ]

If we specialize x and y at the roots, the product vector will be zero. Inversely, to solve the system it suffices to find the values of y for which the matrix is singular and to compute the nonzero vectors in its kernel. Among these vectors we shall restrict attention to those that correspond to a specialization of x. This is equivalent to solving the following problem:

    ( [ 0 0 0 ]         [ 0  0  3 ]       [ 1 6 -4 ] )
    ( [ 0 1 0 ] y^2  +  [ 0 -7  0 ] y  +  [ 2 5  0 ] ) v = 0.
    ( [ 0 0 1 ]         [ 0  0 -7 ]       [ 0 2  5 ] )

This can be transformed to an eigenproblem by performing certain matrix operations, as explained below. Among the computed eigenvectors we shall restrict attention to those that correspond to a specialization of x in order to extract the roots.

In contrast to the approach of adding a u-polynomial, the resultant matrix may be nonlinear in the hidden variable, so M'(x_n) is a matrix polynomial A_d x_n^d + ... + A_1 x_n + A_0, for some d >= 1. If A_d is numerically nonsingular, we reduce the equation (I x_n^d + A_d^{-1} A_{d-1} x_n^{d-1} + ... + A_d^{-1} A_0) v = 0 to the following eigenproblem:

    [ 0              I              ...   0              ]
    [ ...                           ...   ...            ]
    [ 0              0              ...   I              ]  w  =  x_n w,
    [ -A_d^{-1}A_0   -A_d^{-1}A_1   ...   -A_d^{-1}A_{d-1} ]

where w = [v, x_n v, ..., x_n^{d-1} v] and each eigenvalue of the latter matrix corresponds to the x_n value of a common root.

If A_d is numerically singular, we may change variable x_n to (t_1 y + t_2)/(t_3 y + t_4) for random t_1, ..., t_4 and new variable y. This rank balancing is used in MARS in order to improve the conditioning of the leading matrix A_d. If the latter is still singular, we have to consider a generalized eigenproblem on the following matrix pencil:

             [ I                ]       [ 0     -I                 ]
    C(y)  =  [      ...         ] y  +  [             ...    -I    ]
             [             A_d  ]       [ A_0   A_1   ...  A_{d-1} ]

For every eigenvalue \lambda with associated right eigenvector w = [v_1, ..., v_d] of C(y), we have v_i = \lambda^{i-1} v_1 for i = 2, ..., d. Moreover, A(y) has the same eigenvalue \lambda and right eigenvector v_1. These are all standard operations in numerical linear algebra.

We can compute the x coordinates (the values of the eliminated variables) from the eigenvector. MARS assumes that each eliminated variable can be computed as the ratio of two eigenvector elements raised to some power. We use the term extraction recipe to denote the process of computing the eliminated variables from the eigenvectors. Generating extraction recipes involves determining which eigenvector elements are to be divided and to what power the quotient should be raised in order to extract/compute the values of each eliminated variable.

With respect to example 3.1, each generalized eigenvector of C_y is of the form [x^2, x, 1, yx^2, yx, y, ...]^T, where x is a function of the generalized eigenvalue y. We can compute the x value corresponding to each y value by dividing the first element of the eigenvector, namely x^2, by the second element of the eigenvector, namely x.

Special attention is required when there are roots of high multiplicity, which give rise to eigenspaces of high dimension. A more serious problem arises when the matrix determinant vanishes for all values of the hidden variable and is, therefore, a trivial multiple of the resultant polynomial. A fast, easy-to-use package such as MARS will be useful for quickly identifying trivial resultants. Current work is concentrating on numeric methods for transforming the matrix problem in a numerically stable way so that multiple roots are identified and degeneracies are avoided. Another approach is based on perturbation techniques. For more information see [KL92, CE93, Man94, MD95, Mou97] and the references thereof.

4 MARS description

In this section, we present an overview of the MARS package. Then, we discuss the design goals.

4.1 MARS Architecture

The MARS package is implemented in Maple, Matlab, and C. We chose to use three environments because we could not find a standalone package which provided high symbolic, numerical, and application performance. In all three environments, MARS uses resultant objects to characterize resultant matrices.

Although resultant objects are implemented differently in the three environments, all three implementations contain enough information to reconstruct the original system and the resultant matrix polynomial (function array, variable array, hidden variable, resultant matrix polynomial and extraction recipe); consequently, we can reconstruct a resultant object in one environment from a resultant object in another environment, simplifying the task of transferring resultant objects between the environments. Furthermore, making the resultant objects self-contained allows these objects to be passed "by value".

The MARS package works as follows: the Maple resultant formulation routines construct resultant objects, which are then passed to the Matlab and C libraries to be solved numerically.
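The text above lists the components that make a resultant object self-contained. A minimal sketch of such a container (the field names and recipe encoding are our own, not MARS's actual layout), populated with the matrix polynomial of example 3.1:

```python
from dataclasses import dataclass, asdict

@dataclass
class ResultantObject:
    """Self-contained resultant description, mirroring the components
    named in the text; field names here are our own invention."""
    functions: list    # input polynomial system, as strings
    variables: list    # eliminated variables
    hidden: str        # hidden variable
    matrix_poly: list  # [A0, A1, ...]: coefficient matrices of M(hidden)
    recipe: dict       # variable -> (numerator index, denominator index,
                       #              power) over eigenvector entries

    def to_dict(self):
        # A plain dict is easy to serialize, mimicking passing the
        # object "by value" between environments.
        return asdict(self)

# Matrix polynomial of example 3.1: M(y) = A0 + A1*y + A2*y^2
A0 = [[1, 6, -4], [2, 5, 0], [0, 2, 5]]
A1 = [[0, 0, 3], [0, -7, 0], [0, 0, -7]]
A2 = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]

r = ResultantObject(["x*x+6*x+3*y-4", "y*y+2*x-7*y+5"], ["x"], "y",
                    [A0, A1, A2], {"x": (0, 1, 1)})
r2 = ResultantObject(**r.to_dict())   # rebuilt in "another environment"
assert r2 == r
```

Because the object carries everything needed to rebuild it, a round trip through the dictionary form reconstructs an equal object, which is the property that makes transfer between environments straightforward.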

[Figure 1: The architecture of the MARS system. Maple resultant constructors (utility, Sylvester, Macaulay, u-resultant, Bezout, sparse) produce a resultant object, which is transferred from Maple to Matlab/C; the resultant is then solved numerically using rank balancing, eigendecomposition, roots from eigenvectors, and pruning of extraneous solutions.]

4.2 Design goals

We implemented MARS in order to simplify the process of using resultant eigendecomposition techniques to solve multipolynomial systems. Consequently, we want MARS to be both easy to use and efficient; this translated into five design goals: Platform independence (achieved by using Matlab, Maple and C), Ease of use (see section 6.2), Performance, Functionality, and Utility.

4.2.1 Performance

In order for MARS to be useful, it must perform symbolic computation to construct resultants and numerical computation to "solve" resultants quickly and accurately. Unfortunately, we could not find one standalone mathematical package which simultaneously provided high performance for both symbolic processing and numerical computation. Consequently, we decided to use a combination of three packages: Maple to perform symbolic computation, Matlab to perform numerical computations, and C for programming applications. One reason we selected this particular combination is that Matlab and Maple already provide an easy-to-use interface between the two systems. Matlab currently includes a Maple kernel to do symbolic processing, and also provides a top-level Matlab command maple() to execute Maple function calls. Furthermore, Maple provides a function for generating C code.

4.2.2 Functionality

In addition to providing top-level Matlab commands, we also provide lower-level commands and functionality, such as computing the Newton polytope. MARS was designed to simplify the task of incorporating resultant techniques into applications. Many applications involve repeatedly solving a multipolynomial system, and these applications are best handled by creating a generic multipolynomial system in terms of generic coefficients, constructing a generic matrix resultant, and then instantiating and solving the generic matrix resultant. For these applications, MARS resultant construction functions can handle symbolic variables; MARS also provides functions for automatically generating C code corresponding to a matrix resultant.

4.2.3 Utility

Using MARS to implement a multipolynomial solver in an application involves two steps:

1. Prototyping the resultant strategy (determining which system formulation and resultant construction to use); this step often involves trying different formulations.

2. Generating "production code" to be incorporated into an application.

In order to simplify the prototyping step, we designed all the resultant construction functions to use a consistent interface, so that they can be easily interchanged. In order to simplify the production step, we also designed all of the numerical solver routines in Matlab and C to use a consistent interface, so that they can also be easily interchanged. In addition, we implemented Maple routines for automatically generating C source code corresponding to a resultant matrix polynomial.

5 Implementation

In this section, we describe the implementation details of the MARS package: the common interface, root extraction, change of variables, avoiding superfluous roots, and genericity.

5.1 Common resultant interface

One of the design goals was to provide a consistent API for the resultant formulation routines (to facilitate switching formulations). Each of the Maple resultant construction functions adheres to the following function prototype:

    "Resultant-constructor"(
        "function-array",
        "eliminated-variable-array",
        "hidden-variable",
        "optional-debugging-flag")

5.2 Generating extraction recipes

We generate the extraction recipes using the resultant columns (i.e., [x^2, x, 1]^T in example 3.1). We consider each pair of column elements and check whether their ratio corresponds to an exponent of the desired eliminated variable; if so, then the extraction recipe corresponds to the ratio of these two eigenvector elements raised to the inverse power.

In example 3.1, x can be computed by raising the first column (or first vector element), x^2, to the first power and dividing it by the second column (or second vector element), x, raised to the first power: x = x^2/x.

For homogeneous systems, we need to employ a "trick" because all of the monomials have the same total degree; the trick is to specify that one of the variables is identically 1, and then we enforce the trick by scaling the eigenvector at runtime.

5.3 Generic linear change of variables

For certain "degenerate" multipolynomial inputs, resultant techniques can be susceptible to numerical imprecision, and it is well known that a random change of variables ameliorates these precision problems. As such, MARS includes routines for applying a generic linear change of variables.
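A minimal sketch of the idea on the system of example 3.1, with a fixed rotation angle standing in for the random choice MARS makes (the code and names here are our own illustration, and the root used is the first row of the output shown later in example 6.1):

```python
import math

# The system of example 3.1
f = [lambda x, y: x * x + 6 * x + 3 * y - 4,
     lambda x, y: y * y + 2 * x - 7 * y + 5]

t = 0.7                                     # arbitrary fixed rotation angle
M = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
Minv = [[M[0][0], M[1][0]],                 # inverse of a rotation is its
        [M[0][1], M[1][1]]]                 # transpose

# Transformed system F'(X') = F(M^-1 X'), so that F(X) = F'(X') with X' = M X
fprime = [lambda xp, yp, g=g: g(Minv[0][0] * xp + Minv[0][1] * yp,
                                Minv[1][0] * xp + Minv[1][1] * yp)
          for g in f]

# A root of F (first row of the example 6.1 output) ...
x0, y0 = 0.214777517761955, 0.888401837097428
# ... maps under X' = M X to a root of F':
xp = M[0][0] * x0 + M[0][1] * y0
yp = M[1][0] * x0 + M[1][1] * y0
for g in fprime:
    assert abs(g(xp, yp)) < 1e-6            # residuals stay near zero
```

Inverting the map recovers the original root, which is the final step MARS performs after solving the transformed system.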

MARS takes the original multipolynomial system F(X), defined in terms of a variable set X, and constructs a new multipolynomial system F'(X') defined in terms of another variable set X', where X and X' are related according to the following linear transformation (M is a random rotation matrix):

    F(X) = F'(X'),    X' = [M][X].

Then, MARS numerically solves the new multipolynomial system F'(X') and returns a set of common roots U'. Finally, we compute the roots U of the original system F(X) by inverting the map M:

    [U] = [M]^{-1} [U'].

5.4 Pruning extraneous solutions

Since resultant techniques usually correspond to necessary but not sufficient conditions for common roots, resultant techniques often generate a superset of the desired roots of the multipolynomial system. In addition, the numerical eigendecomposition routines compute roots over the complex plane, whereas most users only care about the real roots. For this reason, we provide two functions for pruning non-roots and complex roots: the Matlab functions removecomplex() and removenonroot(). Furthermore, we allow the user to set two global Matlab variables, only_real and only_root, to specify whether they always want extraneous solutions to be pruned.

removecomplex() uses the following criterion:
abs(Im(x_{i,j})) > 1e-6 * max(1, abs(Re(x_{i,j}))).

removenonroot() uses the following criterion:
abs(fn(x_{i,j})) > 1e-6 * max(1, abs(x_{i,j})).

Notice that these naive approaches may incorrectly prune complex conjugate solutions corresponding to double roots; we are currently investigating extensions to the implementation to overcome this limitation.

5.5 Function Genericitization

In order to efficiently and repeatedly solve similar systems using resultant techniques, one must construct a generic multipolynomial which will be specialized for each instantiation to be solved. We (the authors) usually genericitize the multipolynomials by replacing the monomial coefficients with symbolic variables. For example, we would transform the function 13x^2 + 7xy + 6y^2 + 4x - 2y + 9 into a_X0Y0 + a_X1Y0 * X + a_X0Y1 * Y + a_X1Y1 * X * Y + a_X2Y0 * X * X + a_X0Y2 * Y * Y. We use this genericitization technique because the resultant matrices are functions of the monomial coefficients.

Multipolynomial systems are often formulated in terms of other symbolic variables (not the monomial coefficients). For this reason, MARS includes a routine for converting from a system of multipolynomials defined with respect to one variable set into a system of multipolynomials with symbolic monomial coefficients; furthermore, this routine com-

6.1

MARS provides easy-to-use interfaces to top-level Matlab routines for solving multipolynomial systems using all four resultant constructions (Sylvester, u-resultant, Dixon/Bézout, and sparse) for systems of two and three equations in two and three unknowns, respectively. These top-level Matlab routines take as inputs the polynomials and the variables, and output a matrix where each row corresponds to a computed root; they are illustrated below.

6.2 Examples

For illustration, we use MARS to find the common roots of the system of two equations studied above and introduced in example 3.1: f1(x,y) = x*x+6*x+3*y-4 and f2(x,y) = y*y+2*x-7*y+5. Here we use the MARS function uresultantglt2 with the appropriate arguments.

Example 6.1 Using the u-resultant formulation and generic linear transforms to solve a system in two variables:

    % to print out all of the results
    >> format long g
    >> uresultantglt2('x*x+6*x+3*y-4',
         'y*y+2*x-7*y+5','x','y','u')
    ans =
        0.214777517761955     0.888401837097428
       -7.04453580733046     -1.11942329

We can find the common roots of the system of three equations f1(x,y,z) = x*x+6*x+3*y+6*z-4, f2(x,y,z) = y*y+2*x-7*y+5+2*z, f3(x,y,z) = x*x+y*y+z*z-1 by calling uresultantglt3.

Example 6.2 Using the u-resultant formulation and generic linear transforms to solve a system in three variables:

    >> uresultantglt3('x*x+6*x+3*y+6*z-4','y*y+2*x-7*y+5+2*z',
         'x*x+y*y+z*z-1','x','y','z','u')
    ans =
       -0.197730522175298    0.888778516434436    0.413491704058126
        0.417407913885499    0.881547960575403   -0.220553455268931

6.3 Using Resultants of Generic Functions

In most resultant applications, constructing the resultant via symbolic processing takes much longer than numerically solving the resultant. Many applications involve repeatedly solving the same type of multipolynomial system. In this case, we can construct a resultant of generic polynomials, instantiate the generic resultant for each set of parameters, and then numerically solve the instantiated resultant. The MARS package supports this methodology in two ways. First, the resultant constructions can handle unresolved variables, and second, MARS uses Maple's subs() function to instantiate variables. Note that in this case, we need to use the lower-level MARS functions (mapleresultant(), solveresultant()) rather than the top-level Matlab interface (bezout2(), sparse2(), ...).

In example 6.3, the multipolynomial system character-
izes two generic axis-aligned ellipses at unspecified locations

putes and outputs the relationship b etween the original sym-

with unsp eci ed axis lengths. First we construct a generic

b olic variables and the new monomial co ecientvariables.

resultant using the Maple Bezout resultant formulation.

Then, we instantiate this generic resultant using Maple's

subs function. Next, we use the mapleresultant rou-

6 Usage

tine to convert the Maple resultant data structure to a

This section illustrated the use of MARS by a series of small- Matlab resultant data structure. Finally, the function

size examples. solveresultant numerically computes the ro ots. 249

Example 6.3 Constructing a generic resultant corresponding to a generic multipolynomial system for intersecting two ellipses, and then instantiating the resultant so that we don't have to recompute the resultant every time.

>> format long g
% initialize the Maple functions and variables
>> maple('Asym := expand(square((x-Ax)/Arx) + square((y-Ay)/Ary) - 1);');
>> maple('Bsym := expand(square((x-Bx)/Brx) + square((y-By)/Bry) - 1);');
>> maple('fnarray := array(1..2,[Asym,Bsym]);');
>> maple('vararray := array(1..1,[y]);');
% construct the generic resultant
>> maple('ellipseResultant := Bezout(eval(fnarray), eval(vararray), x);');
% instantiate the generic resultant
% Note that Maple resultant objects permanently
% reside in the Maple kernel and that Matlab
% uses only their names to refer to them
>> maple('thisEllRes := map(proc(x) expand(subs({Ax=3,Ay=10,Arx=7,Ary=4,
   Bx=8,By=-4,Brx=8,Bry=16}, x)) end, ellipseResultant);');
% convert the Maple resultant data structure
% into a Matlab resultant data structure using
% the Matlab function ``mapleresultant'' which
% takes as input the name of a Maple resultant
% object
>> matlabEllipseRes = mapleresultant('thisEllRes')
>> intersections = solveresultant(matlabEllipseRes)
intersections =
    9.24740034171016    11.8043022481223
    1.77964742680292    6.06125516327268

6.4 Automatically Generating C Code

One of MARS' key features is the ability to automatically generate C code for the resultant matrix polynomial and for computing the eliminated variables from the eigenvectors. The generated C code interfaces with MARS' C library, which numerically solves matrix polynomials via the eigendecomposition approach. MARS' C library expects the matrix polynomial to be characterized by a three-dimensional matrix where the first index represents the term degree, and the second and third indices characterize the matrix entry. We implemented separate functions in Matlab and Maple for generating C code: MARS includes functions for generating C code for instantiating the coefficients of resultant matrix polynomials, namely resultantccode, and for generating C code for extraction recipes, namely resultantextractrecipeccode.

Example 6.4 MARS includes routines for automatically generating C code for instantiating the coefficients of the resultant matrix.

>> resultantccode('ellipseResultant')
t0 = -2.0/(Arx*Arx)/(Bry*Bry)*Ax*Ax*By-
     2.0/(Ary*Ary)/(Bry*Bry)*Ay*Ay*By+
     2.0/(Bry*Bry)*By+2.0/(Ary*Ary)/(Brx*Brx)*Bx*Bx*Ay+
     2.0/(Ary*Ary)/(Bry*Bry)*By*By*Ay-2.0/(Ary*Ary)*Ay;
resultant[0][0][0]=t0;
t0 = 4.0/(Arx*Arx)/(Bry*Bry)*Ax*By-4.0/(Ary*Ary)/(Brx*Brx)*Bx*Ay;
...

7 Performance

Formulation    Degree   Symbolic   Communication   Root Solver
                        time (s)   time (s)        time (s)
Sylvester      2         0.05       0.88            6.26
Bezout         2         0.22       0.82            0.11
u-resultant    2         1.38       2.2             0.05
Sparse         2         7.97       1.1             6.59
Bezout         3         0.16       1.48            0.17
u-resultant    3        14.1       23.0             0.10
Sparse         3        22.96       1.81            6.82
Bezout         4        26.86       1.48           24.38

Table 1: Running times (seconds) for constructing, converting, and numerically solving systems of two, three and four polynomial equations using MARS on a Cyrix-686 166MHz PC-compatible computer with 48MB of memory.

Table 1 presents times for solving multipolynomial systems using MARS via four different formulations (Sylvester, Bezout, u-resultant, and Sparse). The times are broken down into symbolic computation (constructing the resultant polynomial), communication time (converting the resultant data structure from Maple to Matlab), and the time for numerically computing roots via eigendecomposition. These preliminary timings are not meant as a comparison of the different algorithms; instead, they serve as evidence for the diversity and functionality of MARS (more performance results appear in [WEM98]). These measurements were computed using the multipolynomial systems f1(x,y) = x*x+6*x+3*y-4 and f2(x,y) = y*y+2*x-7*y+5, and f1(x,y,z) = x*x+6*x+3*y+6*z-4, f2(x,y,z) = y*y+2*x-7*y+5+2*z, and f3(x,y,z) = x*x+y*y+z*z-1, both studied above. Bezout (degree 4) used f1(x,y,z,w) = 5*x*x+6*x+3*y+6*z+3+2*w, f2(x,y,z,w) = 4*y*y+4*x+3*z+w*2+4, f3(x,y,z,w) = x+3*z-9+w*7-11*z*z, and f4(x,y,z,w) = x+3*y+2*z-3*w*w+13.

8 Conclusion

Resultant matrices reduce polynomial system solving in the zero-dimensional case to a linear algebra problem, at the heart of which lies an eigenvalue/eigenvector computation. We have designed and implemented a Maple/Matlab/C package of resultant-based methods for solving arbitrary systems whose roots form a set of zero dimension. There are three main components to our package, baptized MARS: (a) the symbolic manipulation in Maple to construct a variety of different resultant matrices; (b) the eigendecomposition technique, coupled with techniques for improving precision, for numerically computing all common solutions in Matlab; and (c) C code generation routines and a library of C functions for incorporating the numerical solver into real-world engineering and scientific applications.

In its current preliminary state, MARS has proven, through the examples of this paper, its diversity and user-friendliness, which make it suitable for educational purposes and for exploring different approaches to system solving. Moreover, the performance of MARS is reasonable, and current work on improving the implementation should lead to a practical, competitive system. Directions of further algorithmic work include the use of matrix structure for reducing the time as well as the space complexity of our methods, and the study of genericity conditions, as in e.g. [Gon91].

Acknowledgments

I.E. was partially supported by European ESPRIT project FRISCO LTR 21.024 and acknowledges enlightening car commutes with Bernard Mourrain. D.M. has been supported by an Alfred P. Sloan Foundation Fellowship, ARO Contract DAAH04-96-1-0257, NSF Grant CCR-9319957, NSF Career Award CCR-9625217, ONR Young Investigator Award N00014-97-1-0631, and Intel. Work partially conducted while A.W. and I.E. were visiting D.M. at the Department of Computer Science of UNC, Chapel Hill.

References

[AS88] W. Auzinger and H.J. Stetter. An elimination algorithm for the computation of all zeros of a system of multivariate polynomial equations. In Proc. Intern. Conf. on Numerical Math., Intern. Series of Numerical Math., 86, pages 12-30. Birkhäuser, Basel, 1988.

[CE93] J. Canny and I. Emiris. An efficient algorithm for the sparse mixed resultant. In G. Cohen, T. Mora, and O. Moreno, editors, Proc. Intern. Symp. on Applied Algebra, Algebraic Algor. and Error-Corr. Codes, Lect. Notes in Comp. Science 263, pages 89-104, Puerto Rico, 1993. Springer.

[CGT97] R.M. Corless, P.M. Gianni, and B.M. Trager. A reordered Schur factorization method for zero-dimensional polynomial systems with multiple roots. In Proc. ACM Intern. Symp. on Symbolic and Algebraic Computation, pages 133-140, 1997.

[CM96] J.-P. Cardinal and B. Mourrain. Algebraic approach of residues and applications. In J. Renegar, M. Shub, and S. Smale, editors, The Mathematics of Numerical Analysis, volume 32 of Lectures in Applied Math., pages 189-210. AMS, 1996.

[CP93] J. Canny and P. Pedersen. An algorithm for the Newton resultant. Technical Report 1394, Comp. Science Dept., Cornell University, 1993.

[EC95] I.Z. Emiris and J.F. Canny. Efficient incremental algorithms for the sparse resultant and the mixed volume. J. Symbolic Computation, 20(2):117-149, August 1995.

[EM96] I.Z. Emiris and B. Mourrain. Polynomial system solving: The case of a 6-atom molecule. Technical Report 3075, INRIA Sophia-Antipolis, France, 1996.

[EM97] I.Z. Emiris and B. Mourrain. Matrices in elimination theory. J. Symbolic Computation, Special Issue on Elimination, 1997. Submitted.

[Emi97] I.Z. Emiris. A general solver based on sparse resultants: Numerical issues and kinematic applications. Technical Report 3110, INRIA Sophia-Antipolis, France, 1997.

[Fau95] J.-C. Faugère. State of GB and tutorial. In Proc. PoSSo (Polynomial System Solving) Workshop on Software, pages 55-71, Paris, March 1995.

[FRI97] FRISCO. First year report, February 1997. http://extweb.nag.co.uk/projects/FRISCO.html.

[Gon91] L. González-Vega. Determinantal formulae for the solution set of zero-dimensional ideals. J. Pure Applied Algebra, 76:57-80, 1991.

[Hof89] C.M. Hoffmann. Geometric and Solid Modeling. Morgan Kaufmann, San Mateo, California, 1989.

[KL92] D. Kapur and Y.N. Lakshman. Elimination methods: An introduction. In B. Donald, D. Kapur, and J. Mundy, editors, Symbolic and Numerical Computation for Artificial Intelligence, pages 45-88. Academic Press, 1992.

[KM95] S. Krishnan and D. Manocha. Numeric-symbolic algorithms for evaluating one-dimensional algebraic sets. In Proc. ACM Intern. Symp. on Symbolic and Algebraic Computation, pages 59-67, 1995.

[KS96] D. Kapur and T. Saxena. Sparsity considerations in Dixon resultants. In Proc. ACM Symp. Theory of Computing, pages 184-191, 1996.

[Mac02] F.S. Macaulay. On some formulae in elimination. Proceedings of the London Mathematical Society, pages 3-27, May 1902.

[Man92] D. Manocha. Algebraic and Numeric Techniques for Modeling and Robotics. PhD thesis, Department of Electrical Engineering and Computer Science, University of California, Berkeley, 1992.

[Man94] D. Manocha. Solving systems of polynomial equations. IEEE Comp. Graphics and Appl., Special Issue on Solid Modeling, pages 46-55, 1994.

[MC94] D. Manocha and J.F. Canny. Efficient inverse kinematics for general 6R manipulators. IEEE Trans. on Robotics and Automation, 10(5):648-657, 1994.

[MD95] D. Manocha and J. Demmel. Algorithms for intersecting parametric and algebraic curves II: Multiple intersections. Graphical Models and Image Proc., 57(2):81-100, 1995.

[MM94] R.D. McKelvey and A. McLennan. The maximal number of regular totally mixed Nash equilibria. Technical Report 865, Div. of the Humanities and Social Sciences, California Institute of Technology, Pasadena, Calif., July 1994.

[Mou97] B. Mourrain. Solving polynomial systems by matrix computations. Manuscript, INRIA Sophia-Antipolis, France. Submitted for publication, 1997.

[MP97] B. Mourrain and V.Y. Pan. Solving special polynomial systems by using structured matrices and algebraic residues. In F. Cucker and M. Shub, editors, Proc. Workshop on Foundations of Computational Mathematics, pages 287-304, Berlin, 1997. Springer.

[MSW94] A.P. Morgan, A.J. Sommese, and C.W. Wampler. A product-decomposition bound for Bezout numbers. SIAM J. Numerical Analysis, 32(4), 1994.

[Reg95] A. Rege. A complete and practical algorithm for geometric theorem proving. In Proc. ACM Symp. on Computational Geometry, pages 277-286, Vancouver, June 1995.

[Ren92] J. Renegar. On the computational complexity of the first-order theory of the reals. J. Symbolic Computation, 13(3):255-352, 1992.

[RR95] M. Raghavan and B. Roth. Solving polynomial systems for the kinematics analysis and synthesis of mechanisms and robot manipulators. Trans. ASME, Special 50th Anniversary Design Issue, 117:71-79, June 1995.

[VVC94] J. Verschelde, P. Verlinden, and R. Cools. Homotopies exploiting Newton polytopes for solving sparse polynomial systems. SIAM J. Numerical Analysis, 31(3):915-930, 1994.

[WC97] A. Wallack and J. Canny. Planning for modular and hybrid fixtures. Algorithmica, 19:40-60, 1997.

[WEM98] A. Wallack, I. Emiris, and D. Manocha. MARS: A Maple/Matlab/C Resultant-based Solver. Technical Report TR98-020, Dept. of Computer Science, University of North Carolina, Chapel Hill, April 1998.

[WM98] A. Wallack and D. Manocha. Robust algorithms for object localization. Intern. J. Comp. Vision, 27(3):243-262, 1998.