Robust Algorithms for Object Localization

Dinesh Manocha†                        Aaron S. Wallack*

Department of Computer Science         Computer Science Division
University of North Carolina           University of California
Chapel Hill, NC 27599-3175             Berkeley, CA 94720
[email protected]                    [email protected]
Abstract

Object localization using sensed data features and corresponding model features is a fundamental problem in machine vision. We reformulate object localization as a least squares problem: the optimal pose estimate minimizes the squared error (discrepancy) between the sensed and predicted data. The resulting problem is non-linear, and previous attempts to estimate the optimal pose using local methods such as gradient descent suffer from local minima and, at times, return incorrect results. In this paper, we describe an exact, accurate and efficient algorithm based on resultants, linear algebra, and numerical analysis, for solving the nonlinear least squares problem associated with localizing two-dimensional objects given two-dimensional data. This work is aimed at tasks where the sensor features and the model features are of different types and where either the sensor features or model features are points. It is applicable to localizing modeled objects from image data, and estimates the pose using all of the pixels in the detected edges. The algorithm's running time depends mainly on the type of non-point features, and it also depends to a small extent on the number of features. On a SPARC 10, the algorithm takes a few microseconds for rectilinear features, a few milliseconds for linear features, and a few seconds for circular features.



* Supported by Fannie and John Hertz Fellowship.
† Supported in part by a Junior Faculty Award, University Research Award, NSF Grant CCR-9319957 and ARPA Contract DABT63-93-C-0048.


1 Introduction

Model-based recognition and localization are fundamental tasks in analyzing and understanding two-dimensional and three-dimensional scenes [12, 10]. There are many applications for vision systems: inspection and quality control, mobile robot navigation, monitoring and surveillance. The input to such systems typically consists of two-dimensional image data or range data. The model-based recognition problem is to determine which object $O$ from a set of candidate objects $\{O_1, O_2, \ldots\}$ best explains the input data. Since the object $O$ can have any orientation or location, the localization problem is to compute the pose $p$ for a given object $O$ which best fits the sensed data.

The problems of recognition and localization have been extensively studied in the vision literature [12, 10]. The majority of the object recognition algorithms utilize edge-based data, where edges are extracted from image data using filter-based methods. Edge data is either comprised of pixels which have exceeded some threshold, or edge segments formed by straight line approximations of those pixels. On the other hand, objects are usually modeled as a set of non-point features, such as line segments, polygons, or parameterized curves. For this reason, localization algorithms must handle matching sensor features and model features which are of different types. We present an exact and robust model-based object localization algorithm for features of different types, where either the sensed features or the model features are points (refer Figure 1). Unlike Kalman filtering based approaches to the same problem, this algorithm is immune to problems of local minima, and unlike approaches which reduce the problem to matching point features which can be solved exactly [16, 26, 5], this algorithm does not introduce error by sampling points from model features.

Figure 1: Given image data, this localization algorithm estimates the object's pose directly from the edge detected pixels and the model features.

Our approach involves reducing the model-based object localization problem between dissimilar features to a nonlinear least squares error problem, where the error of a pose is defined to be the discrepancy between the predicted sensor readings and the actual sensor readings. In this paper, we concentrate on the object localization problem, i.e., we assume that the features have already been interpreted using a correspondence algorithm. We use global methods to solve the problem of optimal pose estimation. In particular, these methods compute the pose that minimizes the sum of the squared errors between the sensed data and the predicted data. Rather than using local methods such as gradient descent, the problem is formulated algebraically and all of the local extrema of the sum of squared error function, including the global minimum, are computed. For the case of matching points to circular features, we use an approximation of the squared error because the error cannot be formulated algebraically without introducing an additional variable for each circular feature.

The algorithm's running time mainly depends upon the algebraic complexity of the resulting system, which is a function of the type of non-point features. Since the algorithm takes all of the sensor features and model features into account, the running time is linear in the size of the data set. It has been implemented and works well in practice. In particular, objects with rectilinear boundaries are localized in microseconds on a SPARC 10 workstation. The running time increases with the complexity of the non-point features: the algorithm takes a few tens of milliseconds to localize polygonal models, and one to two seconds to localize models with circular arcs.

1.1 Previous Work

The localization techniques presented in this paper are applicable to the general field of model-based machine vision. In machine vision applications, localization is usually reformulated as a least squares error problem. Some methods have focused on interesting sensor features such as vertices, and ignored the remainder of the image data [11, 5, 3, 14, 6, 7], thereby reducing the problem to a problem of matching similar features. These approaches are likely to be suboptimal because they utilize a subset of the data [23]. In order to utilize all of the edge data, modeled objects must be compared directly to edge pixel data. In the model-based machine vision literature, there are three main approaches for localizing objects where the model features and the sensed features are of different types: sampling points from the non-point features to reduce it to the problem of localizing point sets; constructing virtual features from the point feature data in order to reduce it to the problem of matching like features; estimating the object's pose by computing the error function between the dissimilar feature types and finding the minimum using gradient descent techniques, or Kalman filtering techniques.

One approach for object localization involves sampling points from non-point features in order to reduce it to the problem of matching points to points. The main advantage of reducing the problem of matching points to non-point features to the problem of matching point sets is that the problem is exactly solvable in constant time. For two-dimensional rigid transformations, there are only three degrees of freedom: $(x, y, \theta)$ [16, 26]. By decoupling these degrees of freedom, the problems become linear, and are exactly solvable in constant time. The crucial observation was that the relative position, $(x, y)$, of the point sets could be determined exactly irrespective of the orientation, since the average of the x-coordinates and y-coordinates was invariant to rotation. This decoupling technique is valid for three dimensions as well [5]. To reduce the problem of matching points to non-point features to the solvable problem of matching points to points, points are sampled from the non-point features. This approach is optimal when objects are modeled as point-sets, but can be suboptimal if objects are modeled as non-point features, because sampling introduces errors when the sampled model points are out of phase with the actual sensed points.

Another approach to object localization for dissimilar features involves constructing virtual features from sensed feature data and then computing the object's pose by matching the like model features and virtual features. Such approaches solve an artificial problem, that of minimizing the error between virtual features and model features rather than minimizing the error between the sensed data and the modeled objects. Faugeras and Hebert [5] presented an algorithm for localizing three-dimensional objects using pointwise positional information. In their approach, virtual features were constructed from sensed data and then objects were identified and localized from the features, rather than the sensed data. Forsyth et al. described a three-dimensional localization technique which represented the data by parameterized conic sections [7]. Objects were recognized and localized using the algebraic parameters characterizing the conic sections. Recently, Ponce, Hoogs, and Kriegman have presented algorithms to compute the pose of curved three-dimensional objects [21, 22]. They relate image observables to models, reformulate the problem geometrically, and utilize algebraic elimination theory to solve a least squares error problem. One drawback of their approach is that they minimize the resultant equation, not the actual error function. Ponce et al. use the Levenberg-Marquardt algorithm for the least squares solution, which may return a local minimum [21].


Another approach to object localization involves reducing it to the correct least squares problem, but then solving that least squares problem using gradient descent techniques. The least squares approach involves formulating an equation to be minimized; consequently, the error function between a point and a feature segment cannot take into account feature boundaries. Gradient descent methods require an initial search point and suffer from problems of local minima. Huttenlocher and Ullman used gradient descent techniques to improve the pose estimates developed from comparing sets of three model point features and three image point features [14]. Many localization algorithms have relied on Kalman filters to match points to non-point features. Kalman filters suffer from local minima problems as well. Given a set of points and corresponding non-point features, Ayache and Faugeras [3] attempted to compute the least squares error pose by iteration and by Kalman filters. Kalman filter-based approaches have demonstrated success in various machine-vision minimization applications [17, 28, 4]. Kalman filter-based approaches suffer from two problems: sensitivity to local minima problems, and the requirement of multiple data points.

A great deal of work has been done on matching features (points to points, lines to lines, points to features, etc.) in the context of interpreting structure from motion [6, 13]. The problem involves matching sensor features from two distinct two-dimensional views of an object; the problem of motion computation is reduced to solving systems of nonlinear algebraic equations. In practice, the resulting system of equations is overconstrained and the coefficients of the equations are inaccurate due to noise. As a result, the optimal solution is usually found using gradient descent methods on related least squares problems. Most of the local methods known in the literature do not work well in practice [15] and an algorithm based on global minimization is greatly desired.

Sparse sensors have recently demonstrated success in recognition and localization applications [27, 20]. The need for accurate localization algorithms has arisen with the emergence of these sparse probing sensor techniques. This technique has been applied to recognition and localization using beam sensor probe data and results in pose estimation with positional accuracy of $\frac{1}{1000}''$ and orientational accuracy of $0.3^\circ$ [27, ?].

The algorithm is applicable to both sparse and dense sensing paradigms; in sparse sensing applications, such as probing, a small set of data is collected in each experiment, whereas in dense sensing applications, such as machine vision, an inordinate amount of data is collected in each experiment. Sparse sensing applications necessitated development of this localization algorithm because sparse sensing strategies require an exact optimal pose strategy. First, a small data set cannot accommodate sampling points from the data set in order to transform the problem of matching features of dissimilar types to matching features of similar types, and second, a small data set is more susceptible to problems of local minima which can arise with local methods such as gradient descent or Kalman filtering.

1.2 Organization

The rest of the paper is organized in the following manner. In Section 2, we review the algebraic methods used in this paper and explain how probabilistic analysis provides a basis for using a least squares error model to solve the localization problem. We describe the localization problem and show how algebraic methods can be applied for optimal pose determination in Section 3. The algorithm for localizing polygonal objects given point data is described in Section 4, and the algorithm for localizing generalized polygonal objects, i.e., objects with linear and circular features, is described in Section 5. In Section 6, we present various examples, and use the localization technique to compare the relative localization accuracy of utilizing all available data compared to using only an interesting subset of the data. Finally, we conclude by highlighting the contributions of this work.

2 Background

2.1 Error Model

In this section, we describe the error model used for localization. Probability theory provides the basis for reducing object localization to a least squares error problem [25]. Assuming imperfect sensing and imperfect models, our goal is to tolerate these discrepancies and provide the best estimate of the object's pose. Real data presents constraints which are inconsistent with perfect models, and for this reason, we reformulate the problem into a related solvable problem; minimizing the squared error is a common method for solving problems with inconsistent constraints, and it provides the maximum likelihood estimate, assuming that the errors (noise) approximate independent normal (Gaussian) distributions.

The reason we assume a normal (Gaussian) error distribution is the composition of error from many different sources. Even though we may be using high precision, high accuracy sensors to probe a two-dimensional object, such sensors have many intrinsic sources of error. Furthermore, real objects can never be perfectly modeled, and this accounts for an additional source of error. These error distributions define a conditional probability distribution on the sensed data values given the boundary position, which depends on the object's pose (equation 1). The conditional probability can be rewritten in terms of the sum of the squared discrepancies (equation 2). The conditional probability of the pose given the sensor data is related to the conditional probability of the sensor data given the pose by Bayes' theorem (equations 3, 4).

The minimum sum squared error pose corresponds to the maximum probability (maximum likelihood) estimate, since minimizing the negated exponent maximizes the result of the whole expression. Thereby, computing the least squares error pose is analogous to performing maximum likelihood estimation.

$$\Pr(\mathrm{sensor}=S \mid \mathrm{pose}=\Theta) \propto e^{-\left(\frac{\mathrm{Discrepancy}^2(s_1,\,s_{1,predicted})}{2\sigma^2}+\frac{\mathrm{Discrepancy}^2(s_2,\,s_{2,predicted})}{2\sigma^2}+\cdots\right)} \quad (1)$$

$$\Pr(\mathrm{sensor}=S \mid \mathrm{pose}=\Theta) \propto e^{-\frac{\sum_i \mathrm{Discrepancy}^2(s_i,\,s_{i,predicted})}{2\sigma^2}} \quad (2)$$

$$\Pr(\mathrm{pose}=\Theta \mid \mathrm{sensor}=S) = \frac{\Pr(\mathrm{sensor}=S \mid \mathrm{pose}=\Theta)\,\Pr(\mathrm{pose}=\Theta)}{\Pr(\mathrm{sensor}=S)} \quad (3)$$

$$\Pr(\mathrm{pose}=\Theta \mid \mathrm{sensor}=S) \propto e^{-\frac{\sum_i \mathrm{Discrepancy}^2(s_i,\,s_{i,predicted})}{2\sigma^2}}\;\frac{\Pr(\mathrm{pose}=\Theta)}{\Pr(\mathrm{sensor}=S)} \quad (4)$$
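To make the link between equation (2) and least squares concrete, the following minimal sketch (our illustration, not from the paper; the discrepancy values and $\sigma$ are invented) evaluates the unnormalized likelihood of a pose from its discrepancies:

```python
import numpy as np

# Invented discrepancies between sensed points and their predicted
# feature positions for some candidate pose, plus a noise sigma.
discrepancies = np.array([0.12, -0.05, 0.30, 0.08])
sigma = 0.1

# Least squares objective: the sum of squared discrepancies.
sse = np.sum(discrepancies ** 2)

# Unnormalized likelihood from equation (2); because exp(-x) is
# monotone decreasing, minimizing sse maximizes the likelihood.
likelihood = np.exp(-sse / (2 * sigma ** 2))
print(sse, likelihood)
```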

2.2 Directly Comparing Features

The sensor data point features should be directly compared to the model features, even when the sensor features and model features are of different types. One measure of discrepancy between points and non-point features is the Euclidean distance between a sensor data point and a specific predicted point on the model feature. Measuring the discrepancy in this way relies on the mistaken assumption that the sensed point feature corresponds to a known point on the model feature. The correspondence information specifies a model feature relating to each sensor feature point; it does not indicate particular points on the model features. A more reasonable measure of the discrepancy is the minimum distance between the sensor feature point and the corresponding model feature, such as the minimum distance between a point and a line, or the minimum distance between a point and a circle. For machine vision applications, measuring the error in this way allows one to utilize all of the edge detected pixels without introducing any artificial error by sampling the model features, for example.

It turns out that the discrepancy error, or minimum distance, between a sensor data point and a linear model feature or a circular arc model is a nonlinear computation. In particular, the minimum distance between a point and a circular arc cannot be formulated exactly without introducing an additional variable; in order to achieve constant time performance, we use an approximation to that minimum distance function which does not require introduction of an additional variable.
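As a concrete illustration (our own sketch, not code from the paper), the two minimum-distance measures described above can be computed as follows, assuming each line is given as $(a, b, c)$ with $aX + bY - c = 0$ and a unit-length normal $(a, b)$:

```python
import numpy as np

def point_line_distance(p, a, b, c):
    # Signed minimum distance from p = (x, y) to the line aX + bY - c = 0,
    # assuming the normal (a, b) has unit length.
    return a * p[0] + b * p[1] - c

def point_circle_distance(p, center, radius):
    # Minimum distance from p to the circle |q - center| = radius.
    return np.hypot(p[0] - center[0], p[1] - center[1]) - radius

print(point_line_distance((3.0, 4.0), 0.0, 1.0, 1.0))      # -> 3.0
print(point_circle_distance((3.0, 4.0), (0.0, 0.0), 2.0))  # -> 3.0
```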

2.3 Solving Algebraic Systems

The key to this algorithm is that we reduce the problem of finding the global minimum of an algebraic function to solving a system of nonlinear algebraic equations. Finding the global minimum of a function is a classic problem and there are many algorithms known in the literature. The algorithms can be categorized into local methods and global methods. Local techniques such as Newton's method require good initial estimates in order to find all the solutions in the given domain, and this is difficult in our applications. Global methods such as resultants and Gröbner bases are based on symbolic reduction and computing roots of univariate polynomials. They are rather slow in practice and, in the context of floating point computations, suffer from numerical problems. Other global techniques include homotopy methods. However, they are rather slow for practical applications [13]. In this paper, we use some recently developed algorithms based on multipolynomial resultants and matrix computations described by Manocha [19]. Resultants are well known in the classical algebraic geometry literature [24, 18] and have recently been effectively used in many problems in robotics, computer graphics and geometric modeling [19].

Figure 2: (a) Given a set of sensed data points in the sensor reference frame, and non-point model features in a model reference frame, the task is to determine the model position in the sensor reference frame. (b) The model's position can be expressed as the transformation between the sensor frame and the model frame.


The algorithm for solving non-linear equations linearizes the problem using resultants and uses matrix computations such as eigendecomposition, singular value decomposition and Gaussian elimination [9]. Routines for these matrix computations are available as part of standard numerical linear algebra libraries. In particular, we used LAPACK [2] and EISPACK [8] routines in our implementation of the algorithm.

2.4 Problem Specification

Given a set of point features in one frame of reference and non-point features defined in another frame of reference, we determine the transformation which optimally maps the features onto each other. Assume without loss of generality that sensor features are data points in the sensor reference frame and correspond to non-point model features in the model frame. The task is to localize the model object in the sensor reference frame; i.e., to determine the transformation between the sensor frame and the model frame consistent with the data (see Figure 2(a)). Figure 2(b) shows the most likely pose of a particular model/object for the points shown in Figure 2(a).

There are two approaches we can take: transform the non-point model features, and then match them to points, or transform the points, and then match them to the non-point model features. We have chosen to transform the points because it enables us to use a uniform approach to solving the mathematical problems corresponding to different feature types. Due to the duality of these two related transformations, the algorithm for solving for the optimal transformation of the points can also be used to solve for the optimal transformation of the non-point model features by computing the inverse transformation.

3 Theoretical Framework

3.1 Transformation Matrices

Two-dimensional rigid transformations can be parameterized by $(X, Y, \theta)$ (refer equation 5). In order to arrive at algebraic equations, we utilize the trigonometric substitution $t = \tan(\frac{\theta}{2})$: $\sin\theta = \frac{2t}{1+t^2}$ and $\cos\theta = \frac{1-t^2}{1+t^2}$. The algebraic matrix $Mat(X, Y, t)$, which is used throughout this paper, is given in equation 5. It is important to remember that $Mat(X, Y, t)$ is not exactly $T(X, Y, \theta)$ but rather $(1+t^2)\,T(X, Y, \theta)$.

$$T(X, Y, \theta) = \begin{bmatrix} \cos\theta & -\sin\theta & X \\ \sin\theta & \cos\theta & Y \\ 0 & 0 & 1 \end{bmatrix} \;\overset{t = \tan\frac{\theta}{2}}{=}\; \frac{1}{1+t^2}\underbrace{\begin{bmatrix} 1-t^2 & -2t & (1+t^2)X \\ 2t & 1-t^2 & (1+t^2)Y \\ 0 & 0 & 1+t^2 \end{bmatrix}}_{Mat(X, Y, t)} \quad (5)$$
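A quick numerical check of this relationship (our sketch; the pose values are arbitrary) verifies that $Mat(X, Y, t)$ is just $(1+t^2)\,T(X, Y, \theta)$ under the half-angle substitution:

```python
import numpy as np

def T(X, Y, theta):
    # Standard 2D rigid transformation (rotation + translation).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, X], [s, c, Y], [0.0, 0.0, 1.0]])

def Mat(X, Y, t):
    # Algebraic form: (1 + t^2) * T(X, Y, theta) with t = tan(theta/2).
    return np.array([[1 - t**2, -2*t, (1 + t**2) * X],
                     [2*t, 1 - t**2, (1 + t**2) * Y],
                     [0.0, 0.0, 1 + t**2]])

theta = 0.7
t = np.tan(theta / 2)
# Dividing Mat by (1 + t^2) recovers the rigid transformation.
assert np.allclose(Mat(2.0, 3.0, t) / (1 + t**2), T(2.0, 3.0, theta))
```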

3.2 Determining Local Extrema

One way to extract the global minimum of a function is to compute all of the local extrema by solving for $\vec{x}$ in $\nabla F(x_1, x_2, \ldots, x_n) = (0, 0, \ldots, 0)$ and find the minimum function value at all the local extrema. This approach succeeds in the non-degenerate case, when the global minimum may correspond to one or more points (a zero dimensional set), and when there are a finite number of local extrema, such as in the case of object localization. We utilize this basic approach in order to find the minimum squared error pose. Let $Error(X, Y, t)$ describe the total squared error as a function of the pose parameters. First, we find all the local extrema of $Error(X, Y, t)$ by solving $\nabla Error(X, Y, t) = (0, 0, 0)$. Then, the global minimum is found by comparing the error function at all of the local extrema poses.

Previous attempts to compute the global minimum of the error function using gradient descent methods suffer from problems of local minima because the partial derivatives of the error function are nonlinear functions. In this algorithm, we utilize elimination techniques to solve for all of the roots of the multivariate system of equations and thus avoid the problems of local methods.
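As a toy illustration of this enumerate-all-extrema strategy (not the paper's resultant machinery, which the following sections develop; the objective here is made up), a computer algebra system can solve $\nabla F = 0$ directly for a small polynomial:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
F = (x - 1)**2 + (y + 2)**2 + x*y  # a small polynomial objective

# Solve the gradient system to enumerate all critical points.
critical = sp.solve([sp.diff(F, x), sp.diff(F, y)], [x, y], dict=True)

# The global minimum is the critical point with the least value of F.
best = min(critical, key=lambda s: F.subs(s))
print(best, F.subs(best))
```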

3.3 Symbolic Representation

Elimination theory deals with solving for coordinates of simultaneous solutions of multivariable systems [18]. Resultant methods are used for projection and elimination. We use resultants to solve for one particular coordinate at a time; these methods involve constructing a resultant equation in each coordinate, where the resultant construction depends on the structure of the underlying system of equations.

The resulting computation involves a considerable amount of symbolic preprocessing. As opposed to constructing the resultant formulation for each data set, we use a generic formulation. The algebraic formulation of the squared error between the transformed points and the lines allows us to use the algebraic equation as a generic representation for the squared error. The total error function between points and linear features can be expressed as a rational polynomial, whose numerator consists of 24 monomial terms. Likewise, the algebraic formulation of the squared approximate error between transformed points and circles allows us to use the algebraic equation as a generic representation for the squared error. The approximate total error function between points and linear features and circular features has a symbolic representation as a rational polynomial with a numerator consisting of 50 monomial terms. The algebraic formulation of the total error functions serves as the intermediate description. After reparameterizing the data sets in terms of algebraic coefficients, only one resultant formulation is necessary.

For each set of points and features, the sum of squared error between the set of points and corresponding features is described by a rational equation in terms of the pose parameters $(X, Y, t)$. The numerator of this rational equation is describable by the algebraic coefficients of the monomials $X^i Y^j t^k$, where our nomenclature is such that $axiyjtk$ refers to the coefficient of the monomial $X^i Y^j t^k$.

The multipolynomial resultant is used to eliminate variables from the system of partial derivative equations, and thus to produce a function in one variable. This univariate function is necessarily zero for all values of that variable which are common roots to the system of equations. In the context of this application, the orientation parameter $t$ is the most difficult pose parameter to determine. The first step in both of these techniques involves eliminating $X, Y$ to produce a resultant polynomial in $t$ for which the partial derivatives of the error function are simultaneously zero. For each such $t$, we can then compute the two remaining pose parameters $X_t$ and $Y_t$. In the case of polygonal models, $X_t$ and $Y_t$ are determined by solving a system of two linear equations in $X_t$ and $Y_t$. For models with circular arc features, computing $X_t$ and $Y_t$ involves setting up another resultant to compute $Y_t$ by eliminating $X_t$, and then computing $X_{Y_t,t}$.

4 Optimal Pose Determination for Models With Linear Features

In this section, we describe the algebraic localization technique for polygonal models given the boundary data points. We define the problem as follows: compute the two-dimensional rigid transformation $T(X, Y, \theta)$ which best transforms the linear model features $\vec{a}$ onto the data points $\vec{x}$ (refer section 2.4). This is accomplished by finding the two-dimensional rigid transformation $T(X, Y, \theta)$ which best transforms the data points $\vec{x}$ onto the linear model features $\vec{a}$ and then computing the inverse transformation. Pose estimation is reduced to a least squares error problem, which is accomplished by computing all of the local extrema of the error function, and then checking the error function at all of the local extrema.

The main distinction between our approach and the related work in the literature lies in our use of algebraic elimination theory to solve the simultaneous system of partial derivative equations. We believe that gradient descent methods are unsuitable for solving this system of equations. Algebraic elimination methods are used for the principal step of solving the system of equations to enumerate the poses with local extrema of the error function.

The rest of the section is organized in the following manner. We first present an overview of the algorithm. Then, we present an algebraic formulation of the error between an arbitrary transformed point and an arbitrary line, followed by a generic description of the total error function, in terms of symbolic algebraic coefficients. Then, we detail the symbolic resultant matrix for solving for one of the pose parameters $t$ of the generic error function. The symbolic coefficients are recomputed each time the algorithm is run. In section 4.3, we describe numerical methods for computing all the orientations $t$ corresponding to the local extrema poses. In section 4.4, we describe how the remaining pose parameters $X_t, Y_t$ are computed for each orientation $t$.

Throughout this section, we will try to elucidate the main concepts using a simple example with four points corresponding to three lines. We will graphically depict the error function between each point and each corresponding linear feature, and we will depict the sum squared error function between the points and linear features. Unfortunately though, since the problem has three degrees of freedom, $(x, y, \theta)$, and we want to use an additional degree of freedom to describe the error function, we cannot graphically depict the entire problem completely, since we do not want to confuse the situation by trying to describe a four dimensional scenario. For these reasons, the graphics depict the three dimensional slice along the hyperplane $x = 0$.

We run through an example in section 4.5, and in section 4.7 we present a data set for which gradient descent methods may fail to find the optimal pose estimate. Finally, we present a faster localization technique for rectilinear objects which exploits a symbolic factoring of the resultant matrix.

4.1 Algorithmic Overview

Algebraic methods are used to compute the orientation, $\theta$, of polygonal objects. This involves online and offline computations: the symbolic resultant matrix for $\theta$ is constructed from the symbolic partial derivatives offline; online, the symbolic coefficients are computed given the data set, thereby instantiating the symbolic resultant matrix. Then, the resultant matrix is solved to enumerate all of the orientations, $\theta$, of poses with local extrema of the error function. Then, the remaining pose parameters $(X, Y)$ are determined for each orientation $\theta$. Finally, the global minimum is found by comparing the error of all of the local extrema poses.

The localization algorithm involves four offline steps:

1. Determine the structure of a generic error function by examining an algebraic expression of the error between an arbitrary transformed point and an arbitrary linear feature.

2. Express the error function $Error(X, Y, \theta)$ as a generic algebraic function in terms of symbolic coefficients.

3. Formulate the partial derivatives of the generic error function $Error(X, Y, \theta)$ with respect to the coordinates: $X$, $Y$, and $\theta$. The motivation is that each zero-dimensional pose with local extremum error satisfies $\nabla Error(X, Y, \theta) = \vec{0}$.

4. Eliminate $X$ and $Y$ from the system of partial derivatives $\nabla Error(X, Y, \theta) = \vec{0}$ in order to generate an expression solely in $\theta$. The result of this step is a symbolic resultant matrix which will be used to solve for all of the orientations $\theta$ of poses with local extrema of the error function. Given the orientation, $\theta$, the remaining pose parameters $X$, $Y$ can be determined by solving a system of linear equations.

In addition to the offline steps, the localization algorithm involves three online steps given a data set (a sketch of this online pipeline follows the list):

1. Instantiate the symbolic coefficients of the error function using the data set of points and associated linear features.

2. Compute all of the candidate orientation parameters by solving the resultant matrix polynomial using eigenvalue techniques (refer section 4.3).

3. $\forall \hat{\theta}$, if $\hat{\theta}$ is a candidate orientation ($Im(\hat{\theta}) \approx 0$, where $Im(x)$ refers to the imaginary component of $x$):

   (a) Compute the remaining pose parameters $X_{\hat{\theta}}, Y_{\hat{\theta}}$ for each $\hat{\theta}$ by solving the linear system of equations $\frac{\partial Error(X,Y,\theta)}{\partial X}\big|_{\hat{\theta}} = 0$, $\frac{\partial Error(X,Y,\theta)}{\partial Y}\big|_{\hat{\theta}} = 0$.

   (b) Compute $Error(X_{\hat{\theta}}, Y_{\hat{\theta}}, \hat{\theta})$ at each local extremum in order to find the global minimum error pose.
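The following schematic sketch (our own summary; `solve_resultant`, `solve_xy` and `pose_error` are hypothetical placeholders for the machinery developed in sections 4.2 through 4.4) shows how the online steps fit together:

```python
import numpy as np

def localize(points, features, solve_resultant, solve_xy, pose_error):
    # Schematic online pipeline; the three callables stand in for the
    # coefficient/resultant, linear-solve and error-evaluation steps.
    candidates = solve_resultant(points, features)   # eigenvalue step
    best_pose, best_err = None, np.inf
    for t in candidates:
        if abs(t.imag) > 1e-8:       # discard complex (extraneous) roots
            continue
        X, Y = solve_xy(points, features, t.real)    # linear solve in X, Y
        err = pose_error(points, features, X, Y, t.real)
        if err < best_err:
            best_pose, best_err = (X, Y, t.real), err
    return best_pose, best_err
```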

4.2 Points and Linear Features

In this section we highlight the squared error function between points and linear features. Given a point $(X, Y)$ and an associated linear feature described by $\vec{a} = (a, b, c)$ where $aX + bY = c$, the error between the point and the feature is $aX + bY - c$, or $(a, b, c) \cdot (X, Y, 1)$. Let $\vec{a}$ be $(a, b, c)^T$ and $\vec{x}$ be $(X, Y, 1)^T$. The squared error between the point $\vec{x}$ and the feature $\vec{a}$, as a function of the transformation $T(X, Y, \theta)$, is given in equation 6. The total squared error for all of the points and associated features is given in equation 7. In order to realize rational functions, we use the trigonometric substitution $t = \tan(\frac{\theta}{2})$ (equation 7).

To circumvent the problems related to numerical inaccuracy, the points and features in the data set should be centered at the origin before invoking the algorithm, and the optimal transformation should be complementarily transformed afterwards. This reduces the symbolic coefficients and thus improves the numerical precision.

$$\|(a, b, c)^T \cdot T(X, Y, \theta)\,(X, Y, 1)^T\|^2 = \frac{\|(a, b, c)^T \cdot Mat(X, Y, t)\,(X, Y, 1)^T\|^2}{(1+t^2)^2} \quad (6)$$

$$Error(X, Y, \theta; \vec{x}, \vec{a}) = \|\vec{a}^T \cdot T(X, Y, \theta)\,\vec{x}\|^2 = \frac{\|\vec{a}^T \cdot Mat(X, Y, t)\,\vec{x}\|^2}{(1+t^2)^2}$$

$$Error(X, Y, \theta) = Error(X, Y, t) = \sum_i Error(X, Y, \theta; \vec{x}_i, \vec{a}_i) = \sum_i \|\vec{a}_i^T \cdot T(X, Y, \theta)\,\vec{x}_i\|^2 = \frac{\sum_i \|\vec{a}_i^T \cdot Mat(X, Y, t)\,\vec{x}_i\|^2}{(1+t^2)^2} \quad (7)$$
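Equation 7 is straightforward to evaluate numerically. A minimal sketch (our illustration; lines are stored as $(a, b, c)$ with error $aX + bY - c$, and the third homogeneous coordinate carries the $(1+t^2)$ factor that the division undoes):

```python
import numpy as np

def Mat(X, Y, t):
    return np.array([[1 - t**2, -2*t, (1 + t**2) * X],
                     [2*t, 1 - t**2, (1 + t**2) * Y],
                     [0.0, 0.0, 1 + t**2]])

def total_error(points, lines, X, Y, t):
    # Sum of squared point-to-line errors under the pose (X, Y, t),
    # per equation (7): divide by (1+t^2)^2 to undo the Mat scaling.
    M = Mat(X, Y, t)
    err = 0.0
    for (px, py), (a, b, c) in zip(points, lines):
        v = M @ np.array([px, py, 1.0])          # transformed point
        err += (np.array([a, b, -c]) @ v) ** 2   # a*x + b*y - c, scaled
    return err / (1 + t**2) ** 2

pts = [(1.0, 0.0), (0.0, 1.0)]
lns = [(1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]     # lines x = 1 and y = 1
print(total_error(pts, lns, 0.0, 0.0, 0.0))  # identity pose -> 0.0
```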

For example, consider the case of the four points and corresponding three lines depicted in Figure 3. One pose which exactly maps the four points onto their corresponding features is given in Figure 4. The four points are: $(\frac{9}{2}, -\frac{1}{2})$, $(\frac{7}{2}, -\frac{1}{2})$, $(4+\frac{\sqrt{3}}{2}, \frac{1}{2})$, $(4-\frac{\sqrt{3}}{2}, \frac{1}{2})$, and the three associated lines are $x + 1 = 0$; $\frac{x}{2} - \frac{y\sqrt{3}}{2} - 1 = 0$; $\frac{x}{2} + \frac{y\sqrt{3}}{2} - 1 = 0$; the first two points both correspond to the first line.

Figure 3: The simple example used to illustrate the main concepts: four points and the corresponding linear features.

Figure 4: One solution transformation ($X = 0, Y = 4, t = -1$) which maps the four points onto the corresponding linear features.

In order to illustrate the concept of an error function, consider the pose $\theta = 0, y = 0$ (refer Figure 5). In this case, the error between the first point and first corresponding linear feature is $\frac{11}{2}$, the error of the second point is $\frac{9}{2}$, and 1 for both the third and fourth points.

To further illustrate the concept of the error function, in Figure 6, we depict the squared error function between the transformed points and the associated linear features as functions of $y$ and $\theta$. The total error function $Error(Y, \theta)$, which is the sum of the four individual squared error functions, is given in Figure 7. The task, which we use resultant techniques to accomplish, is to efficiently find the global minimum of this error function; notice that the global minimum of the error function is zero at $\theta = \frac{3\pi}{2}$, $Y = 4$, which corresponds to mapping the vertices directly onto the corresponding linear features.

Figure 5: The distance between the points and the lines following the identity transformation $X = Y = t = 0$.

The sum of squared error between a set of points and corresponding lines is an algebraic function of the translation and rotation matrix $Mat(X, Y, t)$ multiplied by $(1+t^2)^{-2}$. The coefficients $axiyjtk$ are computable using the data set of points and associated linear features. The total sum squared error function (equation 7) is multiplied by $(1+t^2)^2$ to arrive at a polynomial $FF(X, Y, t)$ in $X, Y$ and $t$ (equations 8, 9).

For each point $\vec{x}$ and associated linear feature $\vec{a}$, the function $FF(X, Y, t; \vec{x}, \vec{a})$ is a polynomial expression in $X, Y$ and $t$. The total function $FF(X, Y, t)$ is the sum of all the individual $FF(X, Y, t; \vec{x}, \vec{a})$ error functions, and we can keep a running tally of all of the symbolic coefficients $a$, $ax1$, $ax1y1$, $\ldots$.

$$Error(X, Y, t) = \frac{FF(X, Y, t)}{(1+t^2)^2} \quad (8)$$

$$FF(X, Y, t) = \sum_i \|\vec{a}_i^T \cdot Mat(X, Y, t)\,\vec{x}_i\|^2 = a + X\,ax1 + XY\,ax1y1 + X^2\,ax2 + Y\,ay1 + Y^2\,ay2 \quad (9)$$
$$+\; t\,at1 + Xt\,ax1t1 + Yt\,ay1t1 + t^2\,at2 + Xt^2\,ax1t2 + XYt^2\,ax1y1t2$$
$$+\; X^2t^2\,ax2t2 + Yt^2\,ay1t2 + Y^2t^2\,ay2t2 + t^3\,at3 + Xt^3\,ax1t3 + Yt^3\,ay1t3$$
$$+\; t^4\,at4 + Xt^4\,ax1t4 + XYt^4\,ax1y1t4 + X^2t^4\,ax2t4 + Yt^4\,ay1t4 + Y^2t^4\,ay2t4$$
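The bookkeeping in equation 9 is mechanical, so a computer algebra system can accumulate the coefficients. Below is a minimal sympy sketch (our own illustration with a made-up two-point data set; lines are stored as $(a, b, -c)$ so the homogeneous product yields the $aX + bY - c$ error):

```python
import sympy as sp

X, Y, t = sp.symbols('X Y t')

def Mat(X, Y, t):
    return sp.Matrix([[1 - t**2, -2*t, (1 + t**2) * X],
                      [2*t, 1 - t**2, (1 + t**2) * Y],
                      [0, 0, 1 + t**2]])

# Hypothetical tiny data set: two points and their lines (a, b, -c).
points = [sp.Matrix([1, 0, 1]), sp.Matrix([0, 1, 1])]
lines = [sp.Matrix([1, 0, -1]), sp.Matrix([0, 1, -1])]

# FF(X, Y, t): the numerator polynomial of equation (8).
FF = sp.expand(sum((a.T * Mat(X, Y, t) * x)[0]**2
                   for x, a in zip(points, lines)))

# Collect the numeric coefficients (a, ax1, ax1y1, ...) of equation (9).
coeffs = sp.Poly(FF, X, Y, t).as_dict()
print(coeffs)
```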


$$\vec{x}_1 = (\tfrac{9}{2}, -\tfrac{1}{2}),\; \vec{a}_1 = (1, 0, 1) \qquad \vec{x}_2 = (\tfrac{7}{2}, -\tfrac{1}{2}),\; \vec{a}_2 = (1, 0, 1)$$
$$\vec{x}_3 = (4-\tfrac{\sqrt{3}}{2}, \tfrac{1}{2}),\; \vec{a}_3 = (\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}, -1) \qquad \vec{x}_4 = (4+\tfrac{\sqrt{3}}{2}, \tfrac{1}{2}),\; \vec{a}_4 = (\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2}, -1)$$

Figure 6: The three dimensional plot of the error function $Error(Y, \theta; \vec{x}_i, \vec{a}_i)$ for each of the four points and corresponding linear features.

Figure 7: The total error function $Error(Y, \theta)$. The task, which we use resultant techniques to accomplish, is to efficiently find the global minimum of this error function.


In the case of polygonal objects, some of the algebraic coefficients are redundant because the generic algebraic expression includes coefficients which are dependent. We note these redundancies because they can be exploited in the resultant construction:

$$ax1y1t2 = 2\,ax1y1 \qquad ay2t2 = 2\,ay2 \qquad ax2t2 = 2\,ax2 \qquad ax2t4 = ax2$$
$$ax1y1t4 = ax1y1 \qquad ay2t4 = ay2 \qquad ax1t3 = ax1t1 \qquad ay1t3 = ay1t1$$

All of the minima of $Error(X, Y, t)$, both local and global, have partial derivatives with respect to $X, Y, t$ equal to zero. To solve for the global minimum of $Error(X, Y, t)$, we want to find the configurations $(X, Y, t)$ for which the partial derivatives are zero (refer equation 10).

$$\nabla Error(X, Y, \theta) = (0, 0, 0) \;\Longleftrightarrow\; \nabla Error(X, Y, t) = (0, 0, 0) \quad (10)$$

Resultant techniques require algebraic (not rational) functions. Therefore, we replace the rational partial derivatives with equivalent algebraic counterparts. Since we are only finding the common roots of equation 10, we are at liberty to use any function $G_X(X, Y, t)$ for which $\frac{\partial Error(X,Y,t)}{\partial X} = 0 \Leftrightarrow G_X(X, Y, t) = 0$, and similarly $G_Y$ for $\frac{\partial Error(X,Y,t)}{\partial Y}$ and $G_t$ for $\frac{\partial Error(X,Y,t)}{\partial t}$.

For $\frac{\partial Error(X,Y,t)}{\partial X}$ and $\frac{\partial Error(X,Y,t)}{\partial Y}$, $G_X(X, Y, t)$ and $G_Y(X, Y, t)$ are equal to the partial derivatives $\frac{\partial FF(X,Y,t)}{\partial X}$, $\frac{\partial FF(X,Y,t)}{\partial Y}$. $G_t(X, Y, t)$ is slightly more complicated because the denominator of $\frac{\partial Error(X,Y,t)}{\partial t}$ is a function of $t$, and that must be taken into account.

$$\frac{\partial Error(X,Y,t)}{\partial X} \propto \frac{\partial FF(X,Y,t)}{\partial X}$$

$$G_X(X,Y,t) = \frac{\partial FF(X,Y,t)}{\partial X} = 2X\,ax2 + 4X\,ax2\,t^2 + 2X\,ax2\,t^4 \quad (11)$$
$$+\; Y\,ax1y1 + 2Y\,ax1y1\,t^2 + Y\,ax1y1\,t^4 + ax1 + ax1t1\,t + ax1t2\,t^2 + ax1t3\,t^3 + ax1t4\,t^4$$

$$\frac{\partial Error(X,Y,t)}{\partial Y} \propto \frac{\partial FF(X,Y,t)}{\partial Y}$$

$$G_Y(X,Y,t) = \frac{\partial FF(X,Y,t)}{\partial Y} = X\,ax1y1 + 2X\,ax1y1\,t^2 + X\,ax1y1\,t^4$$
$$+\; 2Y\,ay2 + 4Y\,ay2\,t^2 + 2Y\,ay2\,t^4 + ay1 + ay1t1\,t + ay1t2\,t^2 + ay1t3\,t^3 + ay1t4\,t^4 \quad (12)$$

$$\frac{\partial Error(X,Y,t)}{\partial t} = \frac{\partial\left(\frac{FF(X,Y,t)}{(1+t^2)^2}\right)}{\partial t} = \frac{(1+t^2)\,\frac{\partial FF(X,Y,t)}{\partial t} - 4t\,FF(X,Y,t)}{(1+t^2)^3} \quad (13)$$

$$G_t(X,Y,t) = (1+t^2)\,\frac{\partial FF(X,Y,t)}{\partial t} - 4t\,FF(X,Y,t) \quad (14)$$

$$\frac{\partial FF(X,Y,t)}{\partial t} = at1 + X\,ax1t1 + Y\,ay1t1 + 2at2\,t + 2X\,ax1t2\,t + 4XY\,ax1y1\,t \quad (15)$$
$$+\; 4X^2\,ax2\,t + 2Y\,ay1t2\,t + 4Y^2\,ay2\,t + 3at3\,t^2 + 3X\,ax1t3\,t^2 + 3Y\,ay1t3\,t^2$$
$$+\; 4at4\,t^3 + 4X\,ax1t4\,t^3 + 4XY\,ax1y1\,t^3 + 4X^2\,ax2\,t^3 + 4Y\,ay1t4\,t^3 + 4Y^2\,ay2\,t^3$$

$$G_t(X,Y,t) = ax1t1\,X - 4ax1\,tX + 2ax1t2\,tX - 3ax1t1\,t^2X + 3ax1t3\,t^2X - 2ax1t2\,t^3X$$
$$+\; 4ax1t4\,t^3X - ax1t3\,t^4X$$
$$+\; ay1t1\,Y - 4ay1\,tY + 2ay1t2\,tY - 3ay1t1\,t^2Y + 3ay1t3\,t^2Y - 2ay1t2\,t^3Y$$
$$+\; 4ay1t4\,t^3Y - ay1t3\,t^4Y$$
$$+\; at1 - 4a\,t + 2at2\,t - 3at1\,t^2 + 3at3\,t^2 - 2at2\,t^3 + 4at4\,t^3 - at3\,t^4 \quad (16)$$

4.2.1 Example and Explanation of Resultant Methods

Elimination of $X, Y$ from this system of equations (equation 11 = 0, equation 12 = 0, equation 16 = 0) expresses the resultant as the determinant of a matrix polynomial in $t$. We now show how these systems are solved using elimination theory. Consider the three functions $G_X(X, Y, t)$, $G_Y(X, Y, t)$, $G_t(X, Y, t)$ which are proportional to the partial derivatives: $\frac{\partial Error(X,Y,t)}{\partial X}$, $\frac{\partial Error(X,Y,t)}{\partial Y}$, $\frac{\partial Error(X,Y,t)}{\partial t}$. Notice that $G_X(X, Y, t)$, $G_Y(X, Y, t)$, $G_t(X, Y, t)$ are all quartic in $t$, but linear in $X$ and $Y$. Resultants can be constructed for any algebraic system of equations, but they are simplest for cases in which the equations are linear in all but a single variable. We can rewrite the three equations as linear functions of $X$, $Y$, and $t$:

$$G_X(X, Y, t) = f_{11}(t)X + f_{12}(t)Y + f_{13}(t) = 0$$
$$G_Y(X, Y, t) = f_{21}(t)X + f_{22}(t)Y + f_{23}(t) = 0$$
$$G_t(X, Y, t) = f_{31}(t)X + f_{32}(t)Y + f_{33}(t) = 0$$

Consider the matrix equation (equation 17); in order for the partial derivatives to be simultaneously zero, $(X, Y, 1)^T$ must be in the null space of the resultant matrix (refer equation 17). The resultant matrix has a null space if and only if the determinant of the resultant matrix is zero. Furthermore, any solution to the system of equations must lie in the null space of the resultant matrix. We have therefore derived a relationship between roots of the system of equations and a single expression in a single variable. If $t$ is a coordinate of a simultaneous solution of this system of equations, then the determinant of the resultant matrix as a function of $t$ must be zero. We can solve for all values of $t$ using the resultant matrix, by finding all $t$ such that $|ResultantMatrix(t)| = 0$ (where $|ResultantMatrix(t)|$ signifies the determinant of $ResultantMatrix(t)$), and then we can solve for $X_t$ and $Y_t$ for each $t$.

$$\begin{bmatrix} G_X(X, Y, t) \\ G_Y(X, Y, t) \\ G_t(X, Y, t) \end{bmatrix} = \underbrace{\begin{bmatrix} f_{11}(t) & f_{12}(t) & f_{13}(t) \\ f_{21}(t) & f_{22}(t) & f_{23}(t) \\ f_{31}(t) & f_{32}(t) & f_{33}(t) \end{bmatrix}}_{ResultantMatrix(t)} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \quad (17)$$

Solving the system of equations is reduced to finding roots of the determinant of a matrix polynomial, where $M(t)$ denotes the resultant matrix.

$$M(t) = Resultant\!\left(\left\{\frac{\partial Error(X,Y,t)}{\partial X},\; \frac{\partial Error(X,Y,t)}{\partial Y},\; \frac{\partial Error(X,Y,t)}{\partial t}\right\},\; \{X, Y\}\right)$$
$$= Resultant(\{G_X(X,Y,t),\, G_Y(X,Y,t),\, G_t(X,Y,t)\},\; \{X, Y\}) \quad (18)$$

$$= \begin{bmatrix}
2ax2 + 4ax2\,t^2 + 2ax2\,t^4 & ax1y1 + 2ax1y1\,t^2 + ax1y1\,t^4 & \begin{smallmatrix} ax1 + ax1t1\,t + ax1t2\,t^2 \\ +\,ax1t3\,t^3 + ax1t4\,t^4 \end{smallmatrix} \\[6pt]
ax1y1 + 2ax1y1\,t^2 + ax1y1\,t^4 & 2ay2 + 4ay2\,t^2 + 2ay2\,t^4 & \begin{smallmatrix} ay1 + ay1t1\,t + ay1t2\,t^2 \\ +\,ay1t3\,t^3 + ay1t4\,t^4 \end{smallmatrix} \\[6pt]
\begin{smallmatrix} ax1t1 - 4ax1\,t + 2ax1t2\,t - 3ax1t1\,t^2 \\ +\,3ax1t3\,t^2 - 2ax1t2\,t^3 + 4ax1t4\,t^3 - ax1t3\,t^4 \end{smallmatrix} & \begin{smallmatrix} ay1t1 - 4ay1\,t + 2ay1t2\,t - 3ay1t1\,t^2 \\ +\,3ay1t3\,t^2 - 2ay1t2\,t^3 + 4ay1t4\,t^3 - ay1t3\,t^4 \end{smallmatrix} & \begin{smallmatrix} at1 - 4a\,t + 2at2\,t - 3at1\,t^2 \\ +\,3at3\,t^2 - 2at2\,t^3 + 4at4\,t^3 - at3\,t^4 \end{smallmatrix}
\end{bmatrix}$$

4.3 Solving the Matrix Polynomial

$M(t)$ was obtained (equation 18) by eliminating $X$ and $Y$ from the equations corresponding to the partial derivatives of $Error(X, Y, t)$. In this section, we highlight a numerically accurate and efficient algorithm to compute the roots of $|M(t)| = 0$. All of the $t$ coordinates of the common solutions of the algebraic equations are roots of $|M(t)| = 0$.

A simple procedure for root finding involves expanding the symbolic determinant and computing the roots of the resulting univariate polynomial. This procedure involves a significant amount of symbolic computation in determinant expansion, which makes it relatively slow. Moreover, the problem of computing roots of polynomials of degree greater than 10 can be numerically ill-conditioned. As a result, the solutions obtained using IEEE floating point arithmetic, or any other model of finite precision arithmetic, are likely to be numerically inaccurate. To circumvent these problems, we use matrix computations.

Solving $|M(t)| = 0$ is reduced to determining the eigenvalues of the block companion matrix $E$. The construction of the block companion matrix $E$ is highlighted in equations 19, 20. It involves matrix inversion and matrix products. In this section we assume that the leading matrix, $M_4$, is numerically well-conditioned. The eigenvalues of $E$ are exactly the roots of the equation $|M(t)| = 0$ [19], computed using routines from LAPACK [2] and EISPACK [8]. Solving for the roots of $M(t)$ in this manner is numerically better conditioned than solving the resultant polynomial formed by expanding the resultant matrix.

$$M(t) = M_4 t^4 + M_3 t^3 + M_2 t^2 + M_1 t + M_0 \quad (19)$$

$$E = \begin{bmatrix} 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \\ 0 & 0 & 0 & I \\ -M_4^{-1}M_0 & -M_4^{-1}M_1 & -M_4^{-1}M_2 & -M_4^{-1}M_3 \end{bmatrix} \quad (20)$$
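In code, the block companion construction of equations 19 and 20 generalizes to any degree; here is a minimal numpy sketch (our illustration, checked on a scalar quadratic rather than the paper's $3 \times 3$ quartic):

```python
import numpy as np

def block_companion(Ms):
    # Ms = [M0, M1, ..., Md]; builds the block companion matrix of the
    # matrix polynomial M(t) = sum_i Mi t^i, assuming Md is invertible.
    d = len(Ms) - 1
    n = Ms[0].shape[0]
    Md_inv = np.linalg.inv(Ms[-1])
    E = np.zeros((d * n, d * n))
    E[:-n, n:] = np.eye((d - 1) * n)          # identity super-diagonal
    for i in range(d):
        E[-n:, i * n:(i + 1) * n] = -Md_inv @ Ms[i]
    return E

# The eigenvalues of E are the roots of det(M(t)) = 0.
# Example: scalar polynomial t^2 - 3t + 2 as 1x1 blocks (roots 1 and 2).
Ms = [np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])]
print(np.linalg.eigvals(block_companion(Ms)))  # -> [2., 1.]
```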

4.3.1 Identifying Possibly Valid Solutions

Resultant formulations can generate extraneous solutions; i.e., roots of the resultant which have no corresponding solutions in the original system of equations. The normal approach is to consider each root of the resultant a candidate solution. The disadvantage of treating all solutions as candidates is that the resultant formalism generates complex solutions even when only real solutions are desired. In this case of matching points to linear features, the orientation $t$ is expected to be real. Furthermore, the resultant construction introduces multiple extraneous roots $(i, i, -i, -i)$, and this is seen by the fact that the symbolic resultant's characteristic polynomial is divisible by $(1+t^2)^2$. We ignore complex solutions by thresholding the imaginary components.

4.3.2 Improving the Accuracy

The eigenvalue decomposition depends upon the invertibility of the largest exponent matrix; in this case, that $M_4$ is well conditioned and the inverse computation does not lead to ill-conditioned matrices. Sometimes, the highest degree matrix $M_4$ is symbolically rank deficient. In this case, we use an algebraic substitution $s = f(t)$, and generate a matrix polynomial $M'$ such that $M'(s) = 0 \Leftrightarrow M(t) = 0$ with an invertible largest exponent matrix $M'_4$ ($M'_4 = Coefficient[M', s^4]$, where $Coefficient[E, t^i]$ describes the coefficient of the $t^i$ term in the expression $E$). Two simple transformations are the geometric inverse $t = \frac{1}{s}$ and a random generic rational transformation $t = \frac{\alpha s + \beta}{\gamma s + \delta}$. We first try the geometric inverse substitution $t = \frac{1}{s}$, which corresponds to checking whether $M_0$ is numerically well-conditioned.

Let $d_M$ be $M$'s polynomial degree. If $M_0$ is not well conditioned, we instead try a number of generic rational transformations: $t = \frac{\alpha s + \beta}{\gamma s + \delta}$, with random $\alpha, \beta, \gamma, \delta$. If any of these transformations produces a well conditioned matrix $M'_4$ (signified by its condition number), we use the random transformation corresponding to the best conditioned matrix. We compute the eigenvalues $s$ of the related large matrix $E'$ (the companion matrix of the $M'$ matrix) to find all of the $s$ for which $|M'(s)| = 0$. The $t$ values for which $|M(t)| = 0$ are computed by applying the inverse function $f^{-1}$ to the $s$ values.

$$t = \frac{1}{s} \;\Longrightarrow\; M'(s) = s^{d_M}\,M(t)\Big|_{t=\frac{1}{s}} \;\Longrightarrow\; Coefficient[M', s^{d_M}] = Coefficient[M, t^0] \quad (21)$$
$$t = \frac{\alpha s + \beta}{\gamma s + \delta} \;\Longrightarrow\; M'(s) = (\gamma s + \delta)^{d_M}\,M(t)\Big|_{t=\frac{\alpha s + \beta}{\gamma s + \delta}}$$

If none of the rational transformations lead to numerically invertible largest exponent matrices, we can use the generalized eigenvalue formulation.
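A sketch of this substitution search (our illustration; the conditioning threshold and the number of random trials are arbitrary assumptions, not values from the paper):

```python
import numpy as np

def choose_substitution(Ms, trials=10, threshold=1e8, seed=0):
    # Pick a substitution that makes the leading matrix well conditioned.
    # Ms = [M0, ..., Md] for M(t) = sum_i Mi t^i. Illustrative sketch only.
    rng = np.random.default_rng(seed)
    if np.linalg.cond(Ms[-1]) < threshold:
        return ('none', None)                # M_d is already usable
    if np.linalg.cond(Ms[0]) < threshold:
        return ('inverse', None)             # t = 1/s reverses the M_i
    best_params, best_cond = None, np.inf
    deg = len(Ms) - 1
    for _ in range(trials):
        al, be, ga, de = rng.normal(size=4)  # t = (al*s + be)/(ga*s + de)
        # Leading matrix of (ga*s + de)^deg * M((al*s + be)/(ga*s + de)):
        lead = sum(Mi * al**i * ga**(deg - i) for i, Mi in enumerate(Ms))
        cond = np.linalg.cond(lead)
        if cond < best_cond:
            best_params, best_cond = (al, be, ga, de), cond
    return ('rational', best_params)
```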

4.3.3 Generalized Eigenvalue Formulation

In case $M_4$ is singular or numerically ill-conditioned and none of the transformations produces a well conditioned matrix, we reduce root finding to a generalized eigenvalue problem of a matrix pencil [9]. The matrix pencil for the generalized eigenvalue problem is constructed from the matrix polynomial in the following manner. Let

$$C_1 = \begin{bmatrix} I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \\ 0 & 0 & 0 & M_4 \end{bmatrix} \qquad C_2 = \begin{bmatrix} 0 & -I & 0 & 0 \\ 0 & 0 & -I & 0 \\ 0 & 0 & 0 & -I \\ M_0 & M_1 & M_2 & M_3 \end{bmatrix} \quad (22)$$

and the eigenvalues of $C_1 t + C_2$ correspond exactly to the roots of $|M(t)| = 0$.
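A sketch of this pencil construction using SciPy's generalized eigenvalue solver (our illustration, again verified on a scalar quadratic):

```python
import numpy as np
from scipy.linalg import eig

def pencil_roots(Ms):
    # Roots of det(sum_i Ms[i] t^i) = 0 via the pencil C1 t + C2,
    # usable even when the leading matrix is singular.
    d = len(Ms) - 1
    n = Ms[0].shape[0]
    C1 = np.eye(d * n)
    C1[-n:, -n:] = Ms[-1]
    C2 = np.zeros((d * n, d * n))
    C2[:-n, n:] = -np.eye((d - 1) * n)
    for i in range(d):
        C2[-n:, i * n:(i + 1) * n] = Ms[i]
    # (C1 t + C2) v = 0 rearranges to (-C2) v = t C1 v, a generalized
    # eigenvalue problem.
    return eig(-C2, C1, right=False)

# t^2 - 3t + 2 again: roots 1 and 2.
Ms = [np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])]
print(pencil_roots(Ms))
```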


4.4 Computing $X_t$ and $Y_t$

Given the value of $t$ computed using eigenvalue routines, the corresponding $X_t$ and $Y_t$ coordinates of the roots of the algebraic equations are found in constant time by solving $\frac{\partial Error(X,Y,t)}{\partial X}\big|_t = 0$ and $\frac{\partial Error(X,Y,t)}{\partial Y}\big|_t = 0$. The two linear equations in $X$ and $Y$ can be solved using Cramer's rule.
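For instance (our sketch), with the partial derivatives written as $f_{i1}X + f_{i2}Y + f_{i3} = 0$ after substituting the candidate $t$, Cramer's rule gives the translation directly:

```python
def solve_xy(f11, f12, f13, f21, f22, f23):
    # Cramer's rule for the 2x2 linear system obtained by fixing t:
    #   f11*X + f12*Y + f13 = 0
    #   f21*X + f22*Y + f23 = 0
    det = f11 * f22 - f12 * f21
    X = (-f13 * f22 + f23 * f12) / det
    Y = (-f11 * f23 + f21 * f13) / det
    return X, Y

# Example: X + 2Y - 5 = 0 and 3X - Y - 1 = 0  ->  X = 1, Y = 2.
print(solve_xy(1.0, 2.0, -5.0, 3.0, -1.0, -1.0))
```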

4.4.1 Verifying That $(X_t, Y_t, t)$ is a Local Extremum

The method highlighted above is necessary but not sufficient and can produce extraneous solutions. Therefore, we need to verify that each $(X_t, Y_t, t)$ configuration is actually a local extremum. This is accomplished by computing the absolute values of the partial derivatives of the error function and comparing them to some $\epsilon$ value, where the $\epsilon$ should be non-zero to accommodate sensor noise and numerical error.

4.5 Example

In this section, we step through an example in order to illustrate how we use resultants to solve the non-linear least squares problem. Notice that we do not recompute $G_X(X, Y, t)$, $G_Y(X, Y, t)$, or $G_t(X, Y, t)$ online; they were intermediate offline results used to construct the symbolic resultant matrix. In other words, the symbolic resultant matrix is the compiled result of the elimination. For brevity, we present the intermediary numeric values to four decimal places, though we use double precision arithmetic. The resultant computation is comprised of seven steps:

1. Compute the symbolic coefficients using the data set.

2. Construct the symbolic resultant using the symbolic coefficients.

3. If necessary, apply transformations in order to find a well conditioned invertible highest exponent matrix.

4. Construct the companion matrix using the resultant matrix.

5. Compute the eigenvalues of the companion matrix.

6. If the algorithm applied a transformation in step 3, find the roots of $t$ by applying an inverse function to the eigenvalues.

7. Compute the remaining pose parameters, $X_t$, $Y_t$, for each candidate, $t$.


Data point $(x_{i,1}, x_{i,2})$   Linear Feature Algebraic Parameters $(a_{i,1}, a_{i,2}, a_{i,3})$
(-7.91, -7.91)    (-0.007534555543, 0.999971614834, -9.004401406730)
(7.91, 7.91)      (-0.007534555543, 0.999971614834, 6.805099825207)
(-7.91, 7.91)     (0.700109199157, 0.714035789899, -12.166817390266)
(7.91, -7.91)     (0.700109199157, 0.714035789899, 10.050656124962)
(-7.91, -7.91)    (-0.710861891474, 0.703331622529, -11.545580060073)
(7.91, 7.91)      (-0.710861891474, 0.703331622529, 10.561258166615)

Table 1: Example data set: data points and corresponding algebraically parameterized linear features. The task is to compute the transformation which optimally transforms the set of points onto the corresponding linear features.

4.5.1 Evaluating the Symbolic Coefficients

The data set of points and associated linear features is given in Table 1. The symbolic coefficients' values shown in Table 2 are computed from the points and features. They describe the coefficients of the sum of the algebraic error functions (refer equation 23). We present $FF(X, Y, t; (-7.91, -7.91), (-0.0075, 1.000, -9.0044))$ in equation 24 and Figure 8.

$$FF(X, Y, t) = FF(X, Y, t; (-7.91, -7.91), (-0.0075, 1.000, -9.0044))$$
$$+\; FF(X, Y, t; (7.91, 7.91), (-0.0075, 1.000, 6.8051))$$
$$+\; FF(X, Y, t; (-7.91, 7.91), (0.7001, 0.7140, -12.1668)) + \cdots \quad (23)$$

$$FF(X, Y, t; (-7.91, -7.91), (-0.0075, 1.000, -9.0044)) = \quad (24)$$
$$1.3322 - 36.7938\,t + 292.9516\,t^2 - 537.2817\,t^3 + 284.0768\,t^4$$
$$+\; 2.3084\,Y - 31.8766\,Yt + 36.0166\,Yt^2 - 31.8766\,Yt^3 + 33.7082\,Yt^4$$
$$+\; 0.9999\,(1+t^2)^2\,Y^2 + 0.0001\,(1+t^2)^2\,X^2 - 0.0151\,(1+t^2)^2\,XY$$
$$-\; 0.0174\,X + 0.2402\,Xt - 0.2714\,Xt^2 + 0.2402\,Xt^3 - 0.2540\,Xt^4$$

Figure 8: The total error function $Error(Y, \theta)$ for this example.

a   = 503.8720      at4   = 985.2446   ax1t4 = 1.5305     ay1t1 = 0
at1 = -2001.9285    ax1   = 1.5305     ax2   = 1.9911     ay1t2 = 17.6102
at2 = 3506.1299     ax1t1 = 0          ax1y1 = -0.0304    ay1t4 = 8.8051
at3 = -2972.5279    ax1t2 = 3.0610     ay1   = 8.8051     ay2   = 4.0089

Table 2: The values of the symbolic coefficients which are used in the resultant formulation: $FF(X, Y, t) = a + at1\,t + \cdots$

4.5.2 Constructing the Matrix Resultant M Using the Symbolic Coefficients

The resultant $M(t)$ in equation 25 is generated from equation 18.

$$M(t) = \begin{bmatrix} 3.9821t^4 + 7.9643t^2 + 3.9821 & -0.0304t^4 - 0.0608t^2 - 0.0304 & 0 \\ -0.0304t^4 - 0.0608t^2 - 0.0304 & 8.0179t^4 + 16.0357t^2 + 8.0179 & 0 \\ 1.5305t^4 + 3.0610t^2 + 1.5305 & 8.8051t^4 + 17.6102t^2 + 8.8051 & \Delta \end{bmatrix} \quad (25)$$

$$\Delta = 2972.5279t^4 - 3071.2816t^3 - 2911.7982t^2 + 4996.7719t - 2001.9285$$

4.5.3 Constructing the Companion Matrix E

At this point, we need to determine whether the highest exponent matrix $M_4$ is numerically invertible. We check its invertibility by computing the condition number for a particular random value of $t$ (in this case, 0.396465). $M(t = 0.396465)$ (equation 26) is obviously not uniformly degenerate since the matrix is well conditioned for a random value of $t$.

$$M(t = 0.396465) = \begin{bmatrix} 5.3324 & -0.0407 & 0 \\ -0.0407 & 10.7365 & 0 \\ 2.0495 & 11.7907 & -596.5278 \end{bmatrix} \quad (26)$$

Computing the companion matrix $E$ of $M$ involves inverting the highest degree matrix (in this case $M_4$) and then left-multiplying all of the matrices by $M_4^{-1}$. We compute $M_4^{-1}M_0, \ldots, M_4^{-1}M_3$ and combine them with identity matrices to construct $E$.

$$M_4^{-1} = \begin{bmatrix} 0.2511 & 0.0010 & 0 \\ 0.0010 & 0.1247 & 0 \\ -0.0001 & -0.0004 & 0.0003 \end{bmatrix}, \qquad M_4^{-1}M_0 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -0.6735 \end{bmatrix}$$

$$M_4^{-1}M_1 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1.6810 \end{bmatrix}, \quad M_4^{-1}M_2 = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -0.9796 \end{bmatrix}, \quad M_4^{-1}M_3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1.0332 \end{bmatrix}$$

$$E = \begin{bmatrix} 0_{9\times3} & & I_{9\times9} & \\ -M_4^{-1}M_0 & -M_4^{-1}M_1 & -M_4^{-1}M_2 & -M_4^{-1}M_3 \end{bmatrix} \quad (12 \times 12)$$

4.5.4 Determining Roots of the Matrix Polynomial by Computing the Eigenvalues of Companion Matrix E

The eigenvalues of $E$ are given in equation 27. Some of the 12 eigenvalues of $E$ ($E$ is a square $12 \times 12$ matrix) are complex with imaginary components $> \epsilon$. This is due to the fact that the algebraic formulation of the transformation $Mat(X, Y, t)$ defines a function for complex values of $t$ as well.

$$Eigenvalues(E) = \left\{ \begin{array}{ll} 0.000000009726 + 1.000000009241i & 0.000000009726 - 1.000000009241i \\ -0.000000009726 + 0.999999990759i & -0.000000009726 - 0.999999990759i \\ -1.231429745741 + 0.000000000000i & -0.000000005427 + 1.000000002712i \\ -0.000000005427 - 1.000000002712i & 0.000000005427 + 0.999999997288i \\ 0.000000005427 - 0.999999997288i & 1.008279326594 + 0.000000000000i \\ 0.628186277667 + 0.384444450582i & 0.628186277667 - 0.384444450582i \end{array} \right\} \quad (27)$$

4.5.5 Computing $X_t$ and $Y_t$ for Candidate Solutions

We compute the remaining pose parameters $(X_t, Y_t)$ for each candidate $t$, the real eigenvalues, $\{-1.231429745741, 1.008279326594\}$. The values of $X_t$, $Y_t$, and $Error(X_t, Y_t, t)$ for $t = -1.231429745741$ and $t = 1.008279326594$ are given in Table 3. The error function $Error(X_t, Y_t, t)$ at $t = 1.008279326594$ is extremely small, and we observe that it is a local minimum from the signs of the second order partial derivatives of $Error(X_t, Y_t, t)$. The remaining candidates are disregarded because they are complex. Since the global minimum is guaranteed to be one of the local extrema, the pose corresponding to $t = 1.008279326594$ is in fact the global minimum.

Type            t                X                Y                Error(X_t, Y_t, t)
Local Maximum   -1.231429745741  -0.392742825743  -1.099677271638  2537.708141328489
Local Minimum   1.008279326594   -0.392742825743  -1.099677271638  0.047461161151

Table 3: Local extrema found by solving $\nabla Error(X, Y, t) = 0$.

The pose is completed by transforming the $t$ value into radians using equation 28. In this case, $t = 1.008279326594 \Rightarrow \theta = 1.5790415$ radians.

$$\theta = 2\tan^{-1}(t) \quad (28)$$

4.6 Effect of Incorrect Correspondence

In this section, we show that the algorithm computes an optimum pose estimate even when the correspondence information is incorrect. The algorithm computes the transformation which maps the point set onto the feature set, regardless of the quality of the fit between the transformed point set and the feature set. To demonstrate its robustness, we applied the localization algorithm on almost the same data set with one exception: the third point now corresponds to the same linear feature as the first two points (refer Table 4). A slice of the error function is depicted in Figure 9, and the poses with extremal squared error are given in Table 5. Therefore, a high error residual at the local minima would indicate a wrong correspondence.

4.7 Example Which Demonstrates Problems of Local Minima

The main advantage of using an algebraic approach to solve the non-linear least squares problem is that it always returns the global minimum, not a local minimum. Gradient descent techniques can fail by returning a local minimum. Table 6 shows a set of points and linear feature parameters which induce a local minimum of the error function. The error function is depicted in Figure 10, and the local minima are given in Table 7.

Data point (x_{i,1}, x_{i,2})    Linear Feature Algebraic Parameters (a_{i,1}, a_{i,2}, a_{i,3})
(-7.91, -7.91)                   (-0.007534555543, 0.999971614834, -9.004401406730)
( 7.91,  7.91)                   (-0.007534555543, 0.999971614834,  6.805099825207)
(-7.91,  7.91)                   (-0.007534555543, 0.999971614834,  6.805099825207)
( 7.91, -7.91)                   ( 0.700109199157, 0.714035789899, 10.050656124962)
(-7.91, -7.91)                   (-0.710861891474, 0.703331622529, -11.545580060073)
( 7.91,  7.91)                   (-0.710861891474, 0.703331622529, 10.561258166615)

Table 4: Incorrect data set: an incorrect correspondence is included to demonstrate the localization technique's robustness.

Figure 9: The total error function $Error(Y, \theta)$ given the wrong correspondence information. Notice that the minimum sum squared error is on the order of 82, implying an incorrect correspondence.

In the case of linear features, it is impossible for a single value of $t$ to correspond to two local minima, because $\frac{\partial Error(X,Y,t)}{\partial X}\big|_t = 0$ and $\frac{\partial Error(X,Y,t)}{\partial Y}\big|_t = 0$ are linear equations in $X$ and $Y$, allowing only a single generic solution.

In the example given in section 4.5, notice that there are only four non-degenerate values of $t$ (refer Table 7); in the example given in section 4.7, there are also only four non-degenerate values of $t$.

Type             t                  X_t                Y_t               Error(X_t, Y_t, t)
Local Maximum    -1.318194808282    11.674225332740    4.323041075059    1778.848128685287
Local Minimum     0.391742823602     2.115870897468    1.402893370181      82.262413290594

Table 5: Extremal error poses found for the data set with the incorrect correspondences.

Model point (x_{i,1}, x_{i,2})    Algebraic Parameters (a_{i,1}, a_{i,2}, a_{i,3})
( 1.0,  0.0)                      (1.0, 0.0,  1.0)
( 0.0,  1.0)                      (0.0, 1.0,  1.0)
(-1.0,  0.0)                      (1.0, 0.0, -1.0)
( 0.0, -1.0)                      (0.0, 1.0, -1.0)
( 0.6,  0.87)                     (3.06, 3.52, 4.09)

Table 6: An example of points and corresponding linear features which induce a local minimum in the error function.

Figure 10: The total error function $Error(Y, \theta)$ for the example with local minima.

4.8 Points and Rectilinear Features

Having gone through the examples for localizing generic points and linear features, we now describe a localization algorithm optimized for points and rectilinear features. This tailored algorithm is much faster than the general-case algorithm because the symbolic resultant can be simplified to a fourth-order equation when all of the linear features are rectilinear, i.e., either parallel or normal to each other.

Type             t                   X_t                Y_t                Error(X_t, Y_t, t)
Local Minimum     0.163456292485     -0.048729446432    -0.056054788052     0.022882658439
Local Minimum    -0.159961402498     -0.094761199366    -0.109006346984     0.055519581104
Local Maximum    -56.167085506603     1.160012442320     1.334393397700    22.817164552496
Local Maximum    -0.061241409474     -0.108071632540    -0.124317694947     0.059481427413

Table 7: Local extrema found by solving $\nabla Error(X, Y, t) = 0$

By redesigning the parts, or concentrating on only the rectilinear edges, this special case can be exploited to achieve real-time localization with cheaper computer hardware.

Although parallel edges can be represented in any manner, the description in which they are all parallel either to the x axis or the y axis is particularly beneficial because some of the symbolic coefficients become redundant, and this redundancy allows us to simplify the symbolic resultant. For example, the $ax1t2$ coefficient (of the $Xt^2$ term in $FF(X,Y,t)$) is exactly the sum of the $ax1$ coefficient (of the $X$ term in $FF(X,Y,t)$) and the $ax1t4$ coefficient (of the $Xt^4$ term in $FF(X,Y,t)$); although these redundancies are unintuitive, they are observed by expanding the error between points and horizontal lines (refer equation 6), and recognizing them is beneficial. The redundant formulation of $FF(X,Y,t)$ is given in equation (29).

$$ax1t2 = ax1 + ax1t4; \qquad ay1t2 = ay1 + ay1t4; \qquad ax1y1 = ax1y1t2 = ax1y1t4 = 0$$

$$
\begin{aligned}
FF(X,Y,t) ={}& a + at1\,t + at2\,t^2 + at3\,t^3 + at4\,t^4 + ax1\,X + ax1t1\,Xt \qquad (29)\\
&+ (ax1 + ax1t4)\,Xt^2 + ax1t1\,Xt^3 + ax1t4\,Xt^4 + ax2\,X^2\\
&+ 2\,ax2\,X^2t^2 + ax2\,X^2t^4 + ay1\,Y + ay1t1\,Yt + (ay1 + ay1t4)\,Yt^2\\
&+ ay1t1\,Yt^3 + ay1t4\,Yt^4 + ay2\,Y^2 + 2\,ay2\,Y^2t^2 + ay2\,Y^2t^4
\end{aligned}
$$

In the case of rectilinear features, the resultant polynomial is a twelfth-degree polynomial in $t$, but it can be factored into a fourth-order polynomial and an eighth-order polynomial with eight extraneous solutions (refer equation 30). Thereby, rectilinear features can be localized faster, because the fourth-order polynomial in $t$ can be solved exactly in constant time [1].

$$\frac{|M(t)|}{(1+t^2)^4} = a_0 + a_1\,t + a_2\,t^2 + a_3\,t^3 + a_4\,t^4 \qquad (30)$$

$$
\begin{aligned}
a_0 ={}& 2\,ax2\,ay1\,ay1t1 - 2\,ax1\,ax1t1\,ay2 + 4\,at1\,ax2\,ay2\\
a_1 ={}& 4\,ax2\,ay1^2 - 2\,ax2\,ay1t1^2 - 4\,ax2\,ay1\,ay1t4 + 4\,ax1^2\,ay2\\
&- 2\,ax1t1^2\,ay2 - 4\,ax1\,ax1t4\,ay2 - 16\,a\,ax2\,ay2 + 8\,at2\,ax2\,ay2\\
a_2 ={}& 6\,ax2\,ay1\,ay1t1 - 6\,ax2\,ay1t1\,ay1t4 + 6\,ax1\,ax1t1\,ay2\\
&- 6\,ax1t1\,ax1t4\,ay2 - 12\,at1\,ax2\,ay2 + 12\,at3\,ax2\,ay2\\
a_3 ={}& 2\,ax2\,ay1t1^2 + 4\,ax2\,ay1\,ay1t4 - 4\,ax2\,ay1t4^2 + 2\,ax1t1^2\,ay2\\
&+ 4\,ax1\,ax1t4\,ay2 - 4\,ax1t4^2\,ay2 - 8\,at2\,ax2\,ay2 + 16\,at4\,ax2\,ay2\\
a_4 ={}& 2\,ax2\,ay1t1\,ay1t4 + 2\,ax1t1\,ax1t4\,ay2 - 4\,at3\,ax2\,ay2 \qquad (31)
\end{aligned}
$$
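Since equation (30) is an ordinary quartic, the online step for rectilinear features reduces to a single scalar root-finding call. A small sketch, assuming the instantiated coefficients $a_0,\dots,a_4$ of equation (31) are given (numpy.roots stands in here for the closed-form quartic solution of [1]):

```python
import numpy as np

def rectilinear_orientations(a, eps=1e-9):
    """Candidate orientations theta = 2*atan(t) from the quartic
    a[0] + a[1] t + a[2] t^2 + a[3] t^3 + a[4] t^4 = 0.
    numpy.roots expects the highest-degree coefficient first."""
    roots = np.roots(a[::-1])
    # Only real roots correspond to physical orientations.
    return [2.0 * np.arctan(r.real) for r in roots if abs(r.imag) < eps]
```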

5 Optimal Pose Determination for Models with Circular and Linear Features

In section 4 we described the localization algorithm for polygonal objects and objects with linear features. In this section, we extend the algorithm to the larger class of generalized-polygonal objects: objects with boundaries consisting of linear and circular features. Elimination techniques are again used to solve the system of partial derivative equations. One difference between the two localization algorithms is that resultant methods are used twice for the circular features case, because the partial derivatives $\frac{\partial Error(X,Y,t)}{\partial X}$ and $\frac{\partial Error(X,Y,t)}{\partial Y}$ are nonlinear in $X$ and $Y$.

5.1 Algorithm Overview

Resultant methods involve formulating the error function generically in terms of algebraic coefficients, formulating the partial derivatives in terms of the algebraic coefficients, and then constructing the symbolic resultant matrix. Resultant methods are used to eliminate $X$ and $Y$ from the system of partial derivative equations to produce an equation solely in $\theta$, which can be solved numerically. Then, resultant methods are used again to eliminate $X$ from a system of two partial derivative equations to produce an equation solely in $Y$. Finally, the remaining pose parameter $X$ is determined for each orientation $\theta$ and y-translation component $Y$.

There are five offline steps, similar to the case of points and linear features:

1. Determine the structure of a generic error function by examining an algebraic expression of the error between an arbitrary transformed point and an arbitrary circular feature.

2. Express the error function $Error(X, Y, \theta)$ as a generic algebraic function in terms of symbolic coefficients.

3. Formulate the partial derivatives of the generic error function $Error(X, Y, \theta)$ with respect to the coordinates $X$, $Y$, and $\theta$. The motivation is that each zero-dimensional pose with local extremum error satisfies $\nabla Error(X, Y, \theta) = \vec{0}$.

4. Eliminate $X$ and $Y$ from the system of partial derivatives $\nabla Error(X, Y, \theta) = \vec{0}$ in order to generate an expression solely in $\theta$. The result of this step is a symbolic resultant matrix which is used to solve for all of the poses with local extremum error. Given the orientation $\theta$, the remaining pose parameters $X$, $Y$ can be determined by solving a system of linear equations.

5. Eliminate $X$ from the system of two partial derivative equations $\frac{\partial Error(X,Y,\theta)}{\partial X} = 0$ and $\frac{\partial Error(X,Y,\theta)}{\partial Y} = 0$ to produce an equation solely in $Y$, which can be solved numerically. The remaining pose parameter $X$ is computed numerically using $\frac{\partial Error(X,Y,\theta)}{\partial X} = 0$.

In addition to the five offline steps, there are three online steps (a code sketch of this loop follows the list):

1. Instantiate the symbolic coefficients of the error function using the data set of points and associated features.

2. Compute all of the interesting orientations by solving the resultant matrix polynomial using eigenvalue techniques (refer section 4.3).

3. For each $\hat{\theta}$: if $\hat{\theta}$ is a candidate orientation ($\mathrm{Im}(\hat{\theta}) \approx 0$):

   (a) Compute all of the interesting $Y_{\hat{\theta}}$ by solving the resultant matrix polynomial using the eigenvalue computations described in section 4.3.

   (b) For each $\hat{Y}_{\hat{\theta}}$: if $\hat{Y}_{\hat{\theta}}$ is a candidate $Y$ translation ($\mathrm{Im}(\hat{Y}_{\hat{\theta}}) \approx 0$):

       i. Compute $X_{\hat{Y},\hat{\theta}}$ using $\frac{\partial Error(X,Y,\theta)}{\partial X}\big|_{\hat{Y},\hat{\theta}} = 0$, which is cubic in $X$.

       ii. Compute $Error(X_{\hat{Y},\hat{\theta}}, \hat{Y}, \hat{\theta})$ at each local extremum in order to find the global minimum error pose.
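A hedged sketch of this online loop follows. The helper names (theta_candidates, y_candidates_at, x_cubic_at, error_at) are hypothetical stand-ins for the resultant and error-function machinery described above, not functions from the paper:

```python
import numpy as np

def localize(theta_candidates, y_candidates_at, x_cubic_at, error_at, eps=1e-6):
    """Enumerate candidate orientations, then candidate Y translations,
    then the roots of the cubic in X, and keep the minimum-error pose.
    theta_candidates() -> complex eigenvalues of the orientation resultant
    y_candidates_at(t) -> complex eigenvalues of the Y resultant at t
    x_cubic_at(y, t)   -> coefficients (highest first) of dError/dX at (y, t)
    error_at(x, y, t)  -> squared error of the pose (x, y, t)"""
    best = None
    for t in theta_candidates():
        if abs(t.imag) > eps:                 # keep near-real orientations
            continue
        for y in y_candidates_at(t.real):
            if abs(y.imag) > eps:             # keep near-real Y translations
                continue
            for x in np.roots(x_cubic_at(y.real, t.real)):
                if abs(x.imag) > eps:
                    continue
                e = error_at(x.real, y.real, t.real)
                if best is None or e < best[0]:
                    best = (e, x.real, y.real, t.real)
    return best                                # (error, X, Y, t)
```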

5.2 Approximating the Error Between Points and Circular Features

Unfortunately, resultant techniques cannot effectively minimize the squared error between points and circular features. In this section, we prove that there is no algebraic rational polynomial expression in the point position $(x, y)$ which describes the squared minimum distance between a point and a circle. This is shown by the following example: consider a point $(x, 0)$ on the x-axis and a circle $\tilde{C}$ centered at the origin with radius $R$ (see Figure 11(a)). The squared minimum distance between the point $(x, 0)$ and the circle $\tilde{C}$ is shown in Figure 11(b). The slope discontinuity of the squared minimum distance at $x = 0$ proves that no algebraic rational polynomial in $x$ alone can describe the square of the minimum distance between a circle and a point.

The minimum distance between a point and a circle can be described by a modified system of algebraic polynomials by introducing an additional variable denoting a point on the circle $\tilde{C}$. But introducing a new variable for each circular feature in that manner would prohibit the use of a single generic symbolic resultant matrix. Furthermore, introducing these extra degrees of freedom would increase the complexity, and therefore the running time, of the eigenvalue computations.

Figure 11: (a) Consider a point $(x, 0)$ on the x-axis and a circle $\tilde{C}$ centered at the origin of radius $R$. (b) The squared minimum distance between $(x, 0)$ and $\tilde{C}$ as a function of $x$. Notice the slope discontinuity at $x = 0$, which proves that the squared minimum distance between a point and a circle cannot be written as an algebraic rational polynomial function in the point position $(x, y)$. (c) The function $F(x) = \left(\frac{x^2 - r^2}{2r}\right)^2$ approximates the squared minimum error for points nearby circular features.

For these reasons, we employ a rational polynomial function $F(x) = \left(\frac{x^2 - r^2}{2r}\right)^2$ to approximate the squared error function between points and circular features (see Figure 11(c)). This approximation should suffice when the data points are nearby the circular features.
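The quality of this approximation is easy to probe numerically. A small sketch (values chosen arbitrarily for illustration) compares $F(x)$ against the true squared distance $(|x| - r)^2$ for a circle of radius $r$ centered at the origin and a point $(x, 0)$:

```python
# Compare F(x) = ((x^2 - r^2) / (2r))^2 against the true squared
# distance (|x| - r)^2 to a circle of radius r centered at the origin.
r = 1.0
for x in [0.9, 0.95, 1.0, 1.05, 1.1, 1.5]:
    true_sq = (abs(x) - r) ** 2
    approx = ((x * x - r * r) / (2.0 * r)) ** 2
    print(f"x={x:4.2f}  true={true_sq:.6f}  F(x)={approx:.6f}")
```

Since $F(x) = (x - r)^2\left(\frac{x + r}{2r}\right)^2$, the correction factor $\left(\frac{x + r}{2r}\right)^2$ is close to 1 when $x \approx r$, which is why the approximation suffices for data points near the circular features.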

5.3 Total Squared Error Function

In this section, we derive the squared error between points and circular features. We show that the squared error between a set of points and a set of linear and circular features can be expressed as a fourth-order rational polynomial function in $t$.

$$
\begin{aligned}
Error_{total}(X,Y,\theta) ={}& Error_{linear\ features}(X,Y,\theta) + Error_{circular\ features}(X,Y,\theta) \qquad (32)\\
={}& \underbrace{\sum_i Error_{linear}(X,Y,\theta,\vec{x}_i,\vec{a}_i)}_{\text{linear feature error}} + \underbrace{\sum_i Error_{circular}(X,Y,\theta,\vec{x}_i,\vec{c}_i,r_i)}_{\text{circular feature error}}
\end{aligned}
$$

The error between a transformed point $T(X,Y,t)\,\vec{x}$ and a circle $C$ is exactly:

$$
\begin{aligned}
Error_{circular}(X,Y,t,\vec{x},\vec{c},r) ={}& \left(\left\|T(X,Y,\theta)\,(x,y,1)^T - (c_x,c_y,1)^T\right\| - r\right)^2 \qquad (33)\\
={}& \left(\|T(X,Y,\theta)\,\vec{x} - \vec{c}\| - r\right)^2 \;\approx\; F\!\left(\|T(X,Y,\theta)\,\vec{x} - \vec{c}\|,\, r\right) \qquad (34)\\
={}& \left(\frac{\|T(X,Y,\theta)\,\vec{x} - \vec{c}\|^2 - r^2}{2r}\right)^2 = \left(\frac{\|Mat(X,Y,t)\,\vec{x} - (1+t^2)\,\vec{c}\|^2 - (1+t^2)^2 r^2}{2r\,(1+t^2)^2}\right)^2 \qquad (35)
\end{aligned}
$$

$$
\begin{aligned}
Error_{total}(X,Y,\theta) ={}& \underbrace{\sum_i Error_{linear}(X,Y,\theta,\vec{x}_i,\vec{a}_i)}_{\text{linear feature error}} + \underbrace{\sum_i Error_{circular}(X,Y,\theta,\vec{x}_i,\vec{c}_i,r_i)}_{\text{circular feature error}} \qquad (36)\\
\approx{}& \sum_i \|\vec{a}_i^{\,T} \cdot T(X,Y,\theta)\,\vec{x}_i\|^2 \;+\; \sum_i \left(\frac{\|T(X,Y,\theta)\,\vec{x}_i - \vec{c}_i\|^2 - r_i^2}{2 r_i}\right)^2 \qquad (37)
\end{aligned}
$$

$$
= \sum_i \frac{\|\vec{a}_i \cdot Mat(X,Y,t)\,\vec{x}_i\|^2}{(1+t^2)^2} \;+\; \sum_i \frac{\frac{1}{4 r_i^2}\left(\|Mat(X,Y,t)\,\vec{x}_i - (1+t^2)\,\vec{c}_i\|^2 - r_i^2\,(1+t^2)^2\right)^2}{(1+t^2)^4} \qquad (38)
$$

It turns out that $\left(\|Mat(X,Y,t)\,\vec{x} - (1+t^2)\,\vec{c}\|^2 - r^2(1+t^2)^2\right)^2$ is divisible by $(1+t^2)^2$ due to the fact that $\cos^2\theta + \sin^2\theta = 1$. By factoring $(1+t^2)^2$ out of the numerator and denominator of $Error_{circular}(X,Y,t)$, we arrive at a quartic polynomial expression. The linear feature error function and the factored circular feature error function have equal denominators $(1+t^2)^2$, which enables the error functions to be easily added by adding the numerators of the error functions. The squared error between points and linear features was derived in section 4. We end up with an error function $Error_{total}(X,Y,t)$ which is a quartic function in $t$ divided by $(1+t^2)^2$.

$$
Error_{total}(X,Y,t) \;=\; \frac{\sum_i \|\vec{a}_i \cdot Mat(X,Y,t)\,\vec{x}_i\|^2}{(1+t^2)^2} \;+\; \frac{\sum_i \frac{1}{4 r_i^2}\left(\frac{\|Mat(X,Y,t)\,\vec{x}_i - (1+t^2)\,\vec{c}_i\|^2}{1+t^2} - (1+t^2)\,r_i^2\right)^2}{(1+t^2)^2} \qquad (39)
$$

$$FF_{total}(X,Y,t) \;=\; Error_{total}(X,Y,t)\,(1+t^2)^2 \qquad (40)$$

Once again, we use an algebraic function $FF_{total}(X,Y,t)$ to describe the sum squared error between points and linear and circular features, where $FF_{total}(X,Y,t) = Error_{total}(X,Y,t)(1+t^2)^2$; its generic expansion is given in equation (41). The partials $\frac{\partial Error_{total}(X,Y,t)}{\partial X}$, $\frac{\partial Error_{total}(X,Y,t)}{\partial Y}$, and $\frac{\partial Error_{total}(X,Y,t)}{\partial t}$ specify a system of equations defining the local extrema of the error function, including the global minimum.

$$
\begin{aligned}
FF_{total}(X,Y,t) ={}& a + X\,ax1 + XY\,ax1y1 + XY^2\,ax1y2 + X^2\,ax2 + X^2Y\,ax2y1 + X^3\,ax3 \qquad (41)\\
&+ Y\,ay1 + Y^2\,ay2 + Y^3\,ay3 + t\,at1 + Xt\,ax1t1 + XYt\,ax1y1t1 + XY^2t\,ax1y2t1\\
&+ X^2t\,ax2t1 + X^2Yt\,ax2y1t1 + X^3t\,ax3t1 + Yt\,ay1t1 + Y^2t\,ay2t1 + Y^3t\,ay3t1\\
&+ t^2\,at2 + Xt^2\,ax1t2 + XYt^2\,ax1y1t2 + XY^2t^2\,ax1y2t2 + X^2t^2\,ax2t2\\
&+ X^2Yt^2\,ax2y1t2 + X^3t^2\,ax3t2 + Yt^2\,ay1t2 + Y^2t^2\,ay2t2 + Y^3t^2\,ay3t2\\
&+ t^3\,at3 + Xt^3\,ax1t3 + XYt^3\,ax1y1t3 + XY^2t^3\,ax1y2t3 + X^2t^3\,ax2t3\\
&+ X^2Yt^3\,ax2y1t3 + X^3t^3\,ax3t3 + Yt^3\,ay1t3 + Y^2t^3\,ay2t3 + Y^3t^3\,ay3t3\\
&+ t^4\,at4 + Xt^4\,ax1t4 + XYt^4\,ax1y1t4 + XY^2t^4\,ax1y2t4 + X^2t^4\,ax2t4\\
&+ X^2Yt^4\,ax2y1t4 + X^3t^4\,ax3t4 + Yt^4\,ay1t4 + Y^2t^4\,ay2t4 + Y^3t^4\,ay3t4\\
&+ (1+t^2)^2\,(X^2+Y^2)^2\,ax4y4
\end{aligned}
$$

In the case of generalized-polygonal objects, some of the algebraic coefficients are redundant. We note these redundancies because they lead to symbolic simplification.

ax3t2 = 2 ax3 - ay3t1    ax3t3 = ax3t1      ax3t4 = ax3 - ay3t1    ax1y2t1 = ax3t1
ay3t2 = 2 ay3 + ax3t1    ax1y2t3 = ax3t3    ax1y2t4 = ax3t4        ax1y2t2 = ax3t2
ay3t4 = ay3 + ax3t1      ay3t3 = ay3t1      ax2y1t1 = ay3t1        ax2y1t2 = ay3t2
ax2y1t3 = ay3t3          ax2y1t4 = ay3t4

Again, the global minimum is found by finding all of the local extrema and selecting the local extremum with minimum error. All of the local extrema are found by solving for simultaneous solutions to the zeros of the partial derivatives of the error function (refer equation 42).

$$\nabla Error_{total}(X,Y,\theta) = (0,0,0) \qquad \nabla Error_{total}(X,Y,t) = (0,0,0) \qquad (42)$$

Again, since we are only finding the common roots of equation (42), we are at liberty to substitute any function $H_X(X,Y,\theta)$ for $\frac{\partial Error_{total}(X,Y,t)}{\partial X}$, with the proviso that $\frac{\partial Error_{total}(X,Y,t)}{\partial X} = 0 \Leftrightarrow H_X(X,Y,\theta) = 0$. Such a substitution is necessary because we will be using resultant techniques, which require algebraic, not rational, functions.

For $\frac{\partial Error_{total}(X,Y,t)}{\partial X}$ and $\frac{\partial Error_{total}(X,Y,t)}{\partial Y}$, $H_X(X,Y,t)$ and $H_Y(X,Y,t)$ are equal to the partial derivatives $\frac{\partial FF_{total}(X,Y,t)}{\partial X}$ and $\frac{\partial FF_{total}(X,Y,t)}{\partial Y}$. $H_t(X,Y,t)$ is slightly more complicated because the denominator of $\frac{\partial Error_{total}(X,Y,t)}{\partial t}$ is a function of $t$, and this must be taken into account.

$$\frac{\partial Error_{total}(X,Y,t)}{\partial X} \propto \frac{\partial FF_{total}(X,Y,t)}{\partial X}, \qquad H_X(X,Y,t) = \frac{\partial FF_{total}(X,Y,t)}{\partial X}$$

$$\frac{\partial Error_{total}(X,Y,t)}{\partial Y} \propto \frac{\partial FF_{total}(X,Y,t)}{\partial Y}, \qquad H_Y(X,Y,t) = \frac{\partial FF_{total}(X,Y,t)}{\partial Y}$$

$$\frac{\partial Error_{total}(X,Y,t)}{\partial t} \;=\; \frac{\partial}{\partial t}\!\left(\frac{FF_{total}(X,Y,t)}{(1+t^2)^2}\right) \;=\; \frac{(1+t^2)\,\frac{\partial FF_{total}(X,Y,t)}{\partial t} - 4t\,FF_{total}(X,Y,t)}{(1+t^2)^3} \qquad (43)$$

$$H_t(X,Y,t) \;=\; (1+t^2)\,\frac{\partial FF_{total}(X,Y,t)}{\partial t} - 4t\,FF_{total}(X,Y,t) \qquad (44)$$
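Equations (43) and (44) only use the quotient rule, so they can be verified mechanically with a computer algebra system. A small sketch (sympy assumed available; the toy $FF_{total}$ is arbitrary, since any polynomial exercises the identity):

```python
import sympy as sp

X, Y, t = sp.symbols('X Y t')
FF = (X - t)**2 + (Y + t**2)**2 + t**4      # arbitrary stand-in for FF_total
Error = FF / (1 + t**2)**2
Ht = (1 + t**2) * sp.diff(FF, t) - 4*t*FF   # equation (44)
# d(Error)/dt must equal Ht / (1 + t^2)^3, per equation (43):
assert sp.simplify(sp.diff(Error, t) - Ht / (1 + t**2)**3) == 0
```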

The resultant of this system of equations is obtained by eliminating $X$ and $Y$ using the Macaulay construction [18].

$$
\begin{aligned}
M(t) ={}& Resultant\!\left(\left\{\frac{\partial Error_{total}(X,Y,t)}{\partial X},\ \frac{\partial Error_{total}(X,Y,t)}{\partial Y},\ \frac{\partial Error_{total}(X,Y,t)}{\partial t}\right\},\ \{X, Y\}\right)\\
={}& Resultant\!\left(\{H_X(X,Y,t),\ H_Y(X,Y,t),\ H_t(X,Y,t)\},\ \{X, Y\}\right) \qquad (45)
\end{aligned}
$$

5.4 Solving for Orientation, t, of Local Extrema Poses

The resultant of the system of equations $\nabla Error_{total}(X,Y,t) = 0$ is a $36 \times 36$ matrix with rank 34. Furthermore, we can eliminate two rank-deficient rows and many redundant singleton rows (rows containing only one non-zero element). Symbolic manipulation produces a $26 \times 26$ matrix with three singleton rows. The roots of the $26 \times 26$ matrix polynomial are computed using the eigendecomposition algorithms described in section 4.3.

The numerical precision of the resultant approach depends on the resultant matrix having full rank. Rank deficiency can result from redundant coefficients. The $26 \times 26$ construction we describe is designed for combinations of linear and circular features. If there are only circular features, we need to reexamine the symbolic resultant to eliminate any remaining rank deficiencies. For eigenvalue computations, rank deficiency results in numerical imprecision. For robustness, we generate both resultant formulations, one assuming linear features are included and one assuming they are not, and use the appropriate matrix to compute the orientation $t$.

5.5 Solving for Y Position, $Y_t$, of Local Extrema Poses

To compute $Y_t$ after solving for $t$, we again use resultant methods, since the remaining equations are nonlinear in $X$ and $Y$. The resultant is constructed using the equations $\frac{\partial Error_{total}(X,Y,t)}{\partial X}\big|_t = 0$ and $\frac{\partial Error_{total}(X,Y,t)}{\partial Y}\big|_t = 0$. At a particular orientation $t$, $\frac{\partial Error_{total}(X,Y,t)}{\partial X}\big|_t = 0$ can be written as a function $U(X_t, Y_t) = 0$, and $\frac{\partial Error_{total}(X,Y,t)}{\partial Y}\big|_t = 0$ can be written as a function $V(X_t, Y_t) = 0$. $U(X_t, Y_t) = 0$ is cubic in $X$ and $V(X_t, Y_t) = 0$ is quadratic in $X$.

$$U(X_t, Y_t) = 0 \;\Leftrightarrow\; u_3(Y_t)\,X_t^3 + u_2(Y_t)\,X_t^2 + u_1(Y_t)\,X_t + u_0(Y_t) = 0 \qquad (46)$$

$$V(X_t, Y_t) = 0 \;\Leftrightarrow\; v_2(Y_t)\,X_t^2 + v_1(Y_t)\,X_t + v_0(Y_t) = 0 \qquad (47)$$

We use the Macaulay resultant to eliminate $X_t$ and produce a function in $Y_t$ (refer equation 48) [18]. The determinant of this resultant matrix is a third-order matrix polynomial in $Y_t$. This construction is more complex than the construction for the case of linear equations (refer section 4.2.1). We compute the roots $Y_t$ at simultaneous solutions using the eigenvalue decomposition method we previously outlined (see section 4.3).

$$
Resultant\!\left(\{U(X_t,Y_t),\,V(X_t,Y_t)\},\ X_t\right)=\begin{bmatrix}
u_0(Y_t)&0&v_0(Y_t)&0&0\\
u_1(Y_t)&u_0(Y_t)&v_1(Y_t)&v_0(Y_t)&0\\
u_2(Y_t)&u_1(Y_t)&v_2(Y_t)&v_1(Y_t)&v_0(Y_t)\\
u_3(Y_t)&u_2(Y_t)&0&v_2(Y_t)&v_1(Y_t)\\
0&u_3(Y_t)&0&0&v_2(Y_t)
\end{bmatrix}\qquad(48)
$$
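The matrix of equation (48) is simple to instantiate once the coefficients $u_i(Y_t)$ and $v_i(Y_t)$ are available. A minimal sketch, assuming the coefficients are supplied as callables (hypothetical names, not from the paper):

```python
import numpy as np

def resultant_matrix(u, v, y):
    """The 5x5 resultant matrix of equation (48) evaluated at a numeric y.
    u = [u0, u1, u2, u3] and v = [v0, v1, v2] map y to the coefficients of
    U (cubic in X) and V (quadratic in X)."""
    u0, u1, u2, u3 = (f(y) for f in u)
    v0, v1, v2 = (f(y) for f in v)
    return np.array([
        [u0,  0.0, v0,  0.0, 0.0],
        [u1,  u0,  v1,  v0,  0.0],
        [u2,  u1,  v2,  v1,  v0 ],
        [u3,  u2,  0.0, v2,  v1 ],
        [0.0, u3,  0.0, 0.0, v2 ],
    ])

# At a common root (X_t, Y_t) of U and V, the determinant of this matrix
# vanishes, so candidate Y_t values are roots of det(resultant_matrix(., y)).
```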

5.6 Solving for X Position, $X_{Y_t,t}$, of Local Extrema Poses

Given $Y_t$ and $t$, $\frac{\partial Error_{total}(X,Y,t)}{\partial X}\big|_{Y_t,t} = 0$ is a cubic equation in $X$ which can be solved numerically. $\frac{\partial Error_{total}(X,Y,t)}{\partial X}\big|_t = 0$ is used, rather than $\frac{\partial Error_{total}(X,Y,t)}{\partial Y}\big|_t = 0$ (which is quadratic in $X$), in case $\frac{\partial Error_{total}(X,Y,t)}{\partial Y}\big|_{Y_t,t}$ is independent of $X$.

5.7 Verifying $(X_{Y_t,t}, Y_t, t)$ is a Local Extremum

Resultant methods can produce extraneous solutions because they encode a necessary, but not sufficient, condition. This particular method can generate $26 \cdot 4 \cdot 5 \cdot 3$ configurations. Once again, we need to verify that each $(X_{Y_t,t}, Y_t, t)$ configuration is a local extremum by computing the absolute values of the partial derivatives and comparing them to some $\epsilon$ threshold which accommodates sensor noise and numerical error.

5.8 Example

In this example, consider the case of four points corresponding to two circles, as shown in Figure 12. One pose which exactly maps the four points onto their corresponding features is given in Figure 13. The four points are $(-2,3)$, $(-1,2)$, $(-2,1)$, and $(-2,-3)$, and the two associated circles are $(x+2)^2 + y^2 = 1$ and $(x-2)^2 + y^2 = 1$; the first three points correspond to the first circle.

The total error function between the transformed points and the associated circular features is given in Figure 14 as a function of $y$ and $t$. The global minimum of the error function is zero at $\theta = \frac{\pi}{2}$, $Y = 2$, which corresponds to mapping the vertices directly onto the corresponding circular features. The algorithm found two distinct local minima: the global minimum, and also $X = 2.1629$, $Y = 0.0915$, $\theta = \pi$.

Figure 12: A simple example used to illustrate the algorithm for points and circular features: circular features A: $(x+2)^2 + y^2 = 1$ and B: $(x-2)^2 + y^2 = 1$, with corresponding points A: $(-2,3)$, A: $(-1,2)$, A: $(-2,1)$, B: $(-2,-3)$.

6 Implementation and Performance

In this section, we describe the performance of the localization algorithm on many cases. We have applied the algorithm to randomly generated data for large data sets, and investigated the relationship between camera resolution and accuracy using simulated data. We used the localization algorithm to compare the localization performance of using all available boundary information against utilizing only isolated boundary points, and we experimentally tested the technique for small data sets using real data.

In the first case, we generated random sensed data, consisting of random features and then random feature points. To simulate the random positions, all of the feature points were transformed by a random rigid two-dimensional transformation. Finally, we ran the localization on the random features and transformed feature points, and compared the estimated transformation with the actual transformation.

For the second suite of experiments, we investigated the relationship between pixel resolution and localization accuracy by localizing objects in known poses at different resolutions. We began with simple polygonal models representing the objects' boundaries, and moved the models into random positions $(x, y, \theta)$. For each pose and resolution, we enumerated all of the pixel squares covered by the boundary features and listed these corresponding features with each pixel. Finally, we compared the estimated transformations with the actual transformations for different pixel resolutions to understand the implications of camera resolution.

Figure 13: One solution transformation ($X = 0$, $Y = 2$, $t = 1$) which maps the four points onto the corresponding circular features: A: $(-2,3) \Rightarrow (-3,0)$, A: $(-1,2) \Rightarrow (-2,1)$, A: $(-2,1) \Rightarrow (-1,0)$, B: $(-2,-3) \Rightarrow (3,0)$.

Figure 14: The total error function $Error(Y, \theta)$ for the four points and associated circular features.

6.1 Verifying the Technique With Random Features

6.1.1 Generating Random Point and Linear Feature Data

Point and corresponding linear feature (edge) data were synthesized as shown in Figure 15 (a code sketch of this procedure follows the parameter values below):

1. Random Linear Feature (Edge) Generation:

   (a) Randomly choose the edge's direction from a uniform distribution [0 ... 2π].

   (b) Randomly choose the edge's distance from the origin from a uniform distribution [min_distance ... max_distance].

Figure 15: Method for constructing random point and linear feature data: (1) generate an edge by choosing its direction and its distance from the origin; (2) generate a point on the edge by intersecting the edge with a circle of random radius centered at the origin; (3) perturb the point along a random direction by a Gaussian-distributed length.

2. Random Point Sampling from the Linear Feature (Edge):

   (a) Construct a circle centered at the origin of random radius by randomly choosing a radius r from a uniform distribution [min_radius ... max_radius].

   (b) Find the point on the edge by intersecting the edge with the circle formed in step (a).

3. Random Gaussian Perturbation of the Data Point:

   (a) Randomly choose a perturbation to the point by choosing the orientation and length of the perturbation: orientation from a uniform distribution [0 ... 2π], length from a normal distribution with mean 0 and standard deviation σ.

min_length was 0.0, max_length was 10.0, min_radius was 10.0, max_radius was 15.0, and σ was 0.1.
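A sketch of this synthesis procedure in code (the $(a_1, a_2, a_3)$ line convention, unit normal plus offset with $a_1 x + a_2 y = a_3$, is an assumption consistent with Table 4, not a definition from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_edge_and_point(min_dist=0.0, max_dist=10.0,
                          min_radius=10.0, max_radius=15.0, sigma=0.1):
    """One (linear feature, data point) pair following the steps above."""
    # 1. Random edge: unit normal direction theta, distance d from origin.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    d = rng.uniform(min_dist, max_dist)
    normal = np.array([np.cos(theta), np.sin(theta)])
    # 2. Point on the edge: intersect with a circle of random radius r.
    r = rng.uniform(min_radius, max_radius)
    foot = d * normal                          # closest point of edge to origin
    along = np.array([-normal[1], normal[0]])  # direction along the edge
    s = np.sqrt(max(r * r - d * d, 0.0))       # offset of the intersections
    point = foot + rng.choice([-s, s]) * along
    # 3. Gaussian perturbation: random direction, N(0, sigma) length.
    phi = rng.uniform(0.0, 2.0 * np.pi)
    point = point + rng.normal(0.0, sigma) * np.array([np.cos(phi), np.sin(phi)])
    return (normal[0], normal[1], d), point
```

Note that the parameter choice above (minimum radius 10.0, maximum distance 10.0) guarantees that the circle always intersects the edge.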

6.1.2 Generating Random Point, Circular Feature Data

Point and corresponding circular feature data were synthesized by generating point and linear feature data as described in section 6.1.1, and generating point and circular feature data as shown in Figure 16 (a companion code sketch follows the parameter values below):

Figure 16: Method for constructing random point and circular feature data: (1) generate a circle by choosing its center and radius; (2) generate a point on the circle perimeter; (3) perturb the point along a random direction by a Gaussian-distributed length.

1. Random Circular Feature Generation:

   (a) Randomly choose the center of the circle from a square by choosing random x and y positions from uniform distributions [min_coordinate ... max_coordinate].

   (b) Randomly choose the circle's radius r from a uniform distribution [min_radius ... max_radius].

2. Random Point Sampling from the Circular Feature:

   (a) Randomly choose the point on the circle perimeter from a uniform distribution [0 ... 2π].

3. Random Gaussian Perturbation of the Data Point:

   (a) Randomly choose a perturbation to the point by choosing the orientation and length of the perturbation: orientation from a uniform distribution [0 ... 2π], length from a normal distribution with mean 0 and standard deviation σ.

min_coordinate was -10.0, max_coordinate was 10.0, min_radius was 3.0, max_radius was 15.0, and σ was 0.1.
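The circular-feature synthesis admits the same kind of sketch (the (c_x, c_y, r) return convention is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_circle_and_point(min_coord=-10.0, max_coord=10.0,
                            min_radius=3.0, max_radius=15.0, sigma=0.1):
    """One (circular feature, data point) pair following the steps above."""
    center = rng.uniform(min_coord, max_coord, size=2)  # 1a. random center
    r = rng.uniform(min_radius, max_radius)             # 1b. random radius
    alpha = rng.uniform(0.0, 2.0 * np.pi)               # 2.  point on perimeter
    point = center + r * np.array([np.cos(alpha), np.sin(alpha)])
    phi = rng.uniform(0.0, 2.0 * np.pi)                 # 3.  Gaussian perturbation
    point = point + rng.normal(0.0, sigma) * np.array([np.cos(phi), np.sin(phi)])
    return (center[0], center[1], r), point
```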

6.1.3 Results

Table 8 compares the estimated poses with the actual poses for the randomly generated data sets of points and linear and circular features with perfect sensing (σ = 0.0). Table 9 compares the estimated poses with the actual poses for the randomly generated data sets of points and linear features with σ = 0.1. Table 10 compares the estimated poses with the actual poses for the randomly generated data sets of points and linear and circular features with σ = 0.1.

6.2 Relationship Between Pixel Resolution and Localization Accuracy

Most of the work in machine vision is camera-based and involves high-precision, high-data-bandwidth sensors. This localization algorithm is useful for edge-detection based approaches because it enables the user to utilize all of the edge data. With regard to edge-detection based approaches, we wanted to investigate the relationship between camera resolution and localization accuracy.

There are many factors which affect the quality of localization estimates, such as lens distortion, lighting, surface reflectance, etc. In these experiments, we concentrated only on the relationship between pixel resolution and localization accuracy in order to gain a clear understanding.

Random Point and Linear and Circular Feature Data
# linear features   # circular features   Actual pose            Estimated pose
8   0   X=2,  Y=2, θ=0.7    X=2.0,  Y=2.0, θ=0.7
8   0   X=-3, Y=2, θ=0.8    X=-3.0, Y=2.0, θ=0.8
8   0   X=-1, Y=2, θ=0.9    X=-1.0, Y=2.0, θ=0.9
8   0   X=4,  Y=6, θ=1.0    X=4.0,  Y=6.0, θ=1.0
4   4   X=2,  Y=2, θ=0.7    X=2.0,  Y=2.0, θ=0.7
4   4   X=-3, Y=2, θ=0.8    X=-3.0, Y=2.0, θ=0.8
4   4   X=-1, Y=2, θ=0.9    X=-1.0, Y=2.0, θ=0.9
4   4   X=4,  Y=6, θ=1.0    X=4.0,  Y=6.0, θ=1.0
0   8   X=2,  Y=2, θ=0.7    X=2.0,  Y=2.0, θ=0.7
0   8   X=-3, Y=2, θ=0.8    X=-3.0, Y=2.0, θ=0.8
0   8   X=-1, Y=2, θ=0.9    X=-1.0, Y=2.0, θ=0.9
0   8   X=4,  Y=6, θ=1.0    X=4.0,  Y=6.0, θ=1.0

Table 8: Performance of the point, linear and circular feature localization technique for randomly generated data with σ = 0.

Random Point and Only Linear Feature Data
# of random data points   Actual pose           Estimated pose
100000    X=2,  Y=2, θ=0.7    X=1.998597,  Y=2.003240, θ=0.700061
100000    X=-3, Y=2, θ=0.8    X=-3.001527, Y=2.002663, θ=0.800075
100000    X=-1, Y=2, θ=0.9    X=-1.001533, Y=2.002073, θ=0.900087
100000    X=4,  Y=6, θ=1.0    X=3.998578,  Y=6.001493, θ=1.000099
1000000   X=2,  Y=2, θ=0.7    X=1.999183,  Y=2.000571, θ=0.699966
1000000   X=-3, Y=2, θ=0.8    X=-3.000826, Y=2.000516, θ=0.799959
1000000   X=-1, Y=2, θ=0.9    X=-1.000823, Y=2.000461, θ=0.899951
1000000   X=4,  Y=6, θ=1.0    X=3.999190,  Y=6.000408, θ=0.999945

Table 9: Performance of the point, linear feature localization technique for randomly generated data with σ = 0.1.

Random Point and Linear and Circular Feature Data
# linear features   # circular features   Actual pose           Estimated pose
100000    100000    X=2,  Y=2, θ=0.7    X=2.000344,  Y=2.001496, θ=0.699958
100000    100000    X=-3, Y=2, θ=0.8    X=-2.999460, Y=2.001097, θ=0.799954
100000    100000    X=-1, Y=2, θ=0.9    X=-0.999193, Y=2.000738, θ=0.899951
100000    100000    X=4,  Y=6, θ=1.0    X=4.001135,  Y=6.00432,  θ=0.999948
1000000   1000000   X=2,  Y=2, θ=0.7    X=2.000105,  Y=1.999681, θ=0.699999
1000000   1000000   X=-3, Y=2, θ=0.8    X=-2.999905, Y=1.999641, θ=0.799999
1000000   1000000   X=-1, Y=2, θ=0.9    X=-0.999903, Y=1.999603, θ=0.900003
1000000   1000000   X=4,  Y=6, θ=1.0    X=4.000110,  Y=5.999568, θ=1.000005

Table 10: Performance of the point, linear and circular feature localization technique for randomly generated data with σ = 0.1.

The term pixel refers to an individual sensor in the sensor matrix. We assume that the resolution of the data produced by the camera is limited by the pixel size. The term pixel resolution refers to the area in the scene, in the neighborhood of a particular object, characterized by a single pixel.

We believe that many systems rely on comparing isolated points on boundary curves, such as vertices and inflection points, because there are easily reproducible published algorithms for efficiently solving this problem. The purpose of these experiments is to determine how significantly the localization precision improves by utilizing all of the available data, rather than only isolated boundary points. We wanted to determine whether utilizing all of the edge data was significantly better than using only isolated points on boundary curves; it was intuitive to us that using more of the available data always improves performance. We found a distinct advantage to using all of the available information, in that our localization technique provided substantially better estimates.

The experiments consisted of localizing a known object from a library of four modeled objects in a known pose, and recording the variation between the actual and estimated pose parameters. These experiments involved generating simulated data for polygonal models in multiple poses at multiple camera resolutions. We used simulated data in order to focus on the relationship between pixel resolution and localization accuracy. We assumed a model of square pixels. The data points were pixel centers paired with their corresponding boundary features, and the data was synthesized by enumerating the centers of all pixel squares which overlapped any of the modeled boundary features (refer Figure 17). A picture of the pixelization of the hex-model object is shown in Figure 19.

Figure 17: Simulated data was synthesized by enumerating the centers of all of the square pixels crossed by the boundary features.
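The paper does not spell out its pixelization routine; the following dense-sampling sketch approximates it for a single boundary segment (a conservative step of a quarter pixel makes missed corner-clipped pixels unlikely, though an exact traversal would enumerate them all):

```python
import numpy as np

def pixels_crossed(p0, p1, res):
    """Centers of square pixels (side `res`) overlapped by segment p0->p1.
    Pixel (i, j) covers [i*res, (i+1)*res) x [j*res, (j+1)*res)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n = max(2, int(np.ceil(np.linalg.norm(p1 - p0) / (0.25 * res))))
    cells = {tuple(np.floor(p / res).astype(int))
             for p in (p0 + s * (p1 - p0) for s in np.linspace(0.0, 1.0, n))}
    return [((i + 0.5) * res, (j + 0.5) * res) for (i, j) in cells]
```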

Four different polygonal models were used, and each polygonal model was localized in ten different poses at five different pixel resolutions. In order to characterize the results, we measured the errors in $(x, y, \theta)$ in a manner similar to standard deviations. In other words, we summed the squares of the discrepancies between the actual and estimated pose parameters for $(x, y, \theta)$, and computed the square root of the average. To characterize the average error, we averaged the error of the optimal fit returned by the localization technique. These error measurements are reported in Table 11.
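The error statistics described above amount to a per-parameter RMS discrepancy; a two-line sketch:

```python
import numpy as np

def pose_error_stats(actual, estimated):
    """Square root of the average squared discrepancy per pose parameter.
    actual, estimated: arrays of shape (n_trials, 3) holding (x, y, theta)."""
    d = np.asarray(actual) - np.asarray(estimated)
    return np.sqrt(np.mean(d * d, axis=0))   # (rms_x, rms_y, rms_theta)
```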

The results from using only data points corresponding to the vertices are shown in Table 12. The most important variable for polygons is $\theta$, because the $x$ and $y$ pose parameters are linear functions and can be easily solved given $\theta$. Notice that the pose parameter errors appear to shrink by a factor of between three and four each time the resolution is doubled. The $(x, y, \theta)$ values do stop converging at very high resolutions: for most of the objects, the pose parameters $(x, y)$ acquiesce to within 0.01 units of translation (0.02% of the object length), and the orientation parameter $\theta$ appears to acquiesce to a precision of 0.0001 radians.

Figure 18: Four objects: Hex-model, Fiducial, Letter T, Number 6. Almost all of the vertex positions are given, but including the vertex positions for the letter T would clutter the figure without adding any additional information.

Also notice that the sum squared error shrinks by a factor of two each time the resolution is doubled, implying that the squared error per data point shrinks by a factor of four each time the resolution is doubled.

Next, we investigated the performance degradation related to utilizing only isolated boundary points, rather than all of the available boundary information. To accomplish this, we generated data points only at the vertices. We did not utilize the published algorithms for estimating pose from point sets, but instead used the localization technique with only the data points from the vertices. All such data points are easily found by modifying the object model and then synthesizing the data in exactly the same manner. The model was modified by replacing each edge by extremely small edges at the endpoints.

Figure 19: The simulated pixel data for the hex-model object at position (7, 3) at orientation -0.32 radians at resolution 1 pixel unit.

In this way, the data synthesizer only generates data points for the vertices. This approach does not give exactly the same answer as the commonly used point-set algorithm, because the error function of the localization technique is not the squared distance between the transformed model vertex and the actual vertex, but the sum of the two squared errors between the transformed actual vertices and the two linear model features. The statistics for using only the vertices of the hex-model object are given in Table 12.

We interpret the results in Table 12, the localization experiments using only the corner data, in comparison to the full-data experiments of Table 11. The most important variable for polygons is $\theta$, because the $x$ and $y$ pose parameters are linear functions and can be easily solved given $\theta$. At the sharpest resolution, the $\theta$ estimates of the localization technique are 20 times more accurate than those of the vertex-based localization technique. Furthermore, the errors in $(x, y)$ are larger for the vertex case at every resolution save one.

The only error statistic which does not differ significantly between the two approaches is the squared error. On a per-data-point basis, the squared error of the localization technique at the sharpest resolution ($\frac{5.42}{614.6} \approx 0.01$) is very similar to the squared error per data point of the vertex-based method ($\frac{0.09}{17}$). We found that the pose estimates were more accurate using more data points, which is an intuitive result. It may seem confusing that the sum squared errors are relatively equivalent; one explanation is that since there are fewer data points, a vertex-based approach fits the data more easily, even though it does not produce a more precise result.

7 Conclusion

7.1 Conclusion

In this paper, we described an object localization technique for sensor and model features of different types, such as polygonal models and probe points. The approach involved reducing the pose estimation problem to a least squares error problem. We utilized algebraic elimination techniques to solve the least squares error problems exactly. The main advantages of this localization algorithm are its applicability to both sparse and dense sensing strategies, its immunity to local minima problems, its exact computation of the global minimum, and its high speed. We used this technique to compare the relative performance of using only a subset of the data (interesting features such as vertices) with using the entire data set, and we found the intuitive result that utilizing more data provides a more precise position estimate. This algorithm has been successfully applied in recognition and localization applications using very precise optical sensors.

7.2 Future Work

This approach is extendible to three-dimensional localization from three-dimensional range data. This would involve computing generic error functions between points and features of co-dimension one (planes, surfaces) as a function of the six degrees of freedom. After the generic error functions have been determined, all we need to do is construct larger resultant matrices to solve for the minima. The resultant solving may take slightly longer, owing to the greater number of degrees of freedom and algebraic complexity. This approach may also be extendible to three-dimensional localization from two-dimensional perspective image data. In this case, we can consider the image points to be rays going through the eye, and compute the disparity between these rays and model edges in three dimensions. The crucial step involves succinctly formulating the generic error between pairs algebraically. Again, resultant solving may take a longer time, owing to the greater number of degrees of freedom and algebraic complexity.

Acknowledgements

The authors wish to acknowledge John Canny for helpful discussions, and Ruth Rosenholtz, Dieter Koller, Jeff Wendlandt, and Clark Olson for their comments.

References

[1] Standard Mathematical Tables. Chemical Rubber Company, 1973.

[2] E. Anderson, Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, and D. Sorensen. LAPACK User's Guide, Release 1.0. SIAM, Philadelphia, 1992.

[3] N. Ayache and O. D. Faugeras. Hyper: A new approach for the recognition and positioning of two-dimensional objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1):44–54, 1986.

[4] R. Deriche and O. Faugeras. Tracking line segments. Image and Vision Computing, 8:261–270, 1990.

[5] O. D. Faugeras and M. Hebert. The representation, recognition, and locating of 3-d objects. International Journal of Robotics Research, 5(3):27–52, 1986.

[6] O. D. Faugeras and S. Maybank. Motion from point matches: Multiplicity of solutions. International Journal of Computer Vision, 4:225–246, 1990.

[7] D. Forsyth, L. Mundy, A. Zisserman, A. Heller, and C. Rothwell. Invariant descriptors for 3-d object recognition and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13:971–991, Oct 1991.

[8] B. S. Garbow, J. M. Boyle, J. Dongarra, and C. B. Moler. Matrix Eigensystem Routines - EISPACK Guide Extension, volume 51 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 1977.

[9] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins Press, Baltimore, 1989.

[10] W. Eric L. Grimson. Object Recognition by Computer: The Role of Geometric Constraints. MIT Press, Cambridge, MA, 1990.

[11] W. Eric L. Grimson and Tomas Lozano-Perez. Model-based recognition and localization from sparse range or tactile data. International Journal of Robotics Research, 3(3):3–35, 1984.

[12] B. K. P. Horn. Robot Vision. McGraw-Hill, seventh edition, 1989.

[13] B. K. P. Horn. Relative orientation revisited. Journal of the Optical Society of America, 8(10):1630–1638, 1991.

[14] D. P. Huttenlocher and S. Ullman. Recognizing solid objects by alignment with an image. International Journal of Computer Vision, 5(2):195–212, 1990.

[15] C. Jerian and R. Jain. Polynomial methods for structure from motion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(12):1150–1166, 1990.

[16] A. Kalvin, E. Schonberg, J. T. Schwartz, and M. Sharir. Two-dimensional model-based boundary matching using footprints. International Journal of Robotics Research, 5(4):38–55, 1986.

[17] D. Koller, K. Daniilidis, and H.-H. Nagel. Model-based object tracking in monocular image sequences of road traffic scenes. International Journal of Computer Vision, 10(3):257–281, 1993.

[18] F. S. Macaulay. On some formulae in elimination. Proceedings of the London Mathematical Society, pages 3–27, May 1902.

[19] D. Manocha. Algebraic and Numeric Techniques for Modeling and Robotics. PhD thesis, Department of Electrical Engineering and Computer Science, University of California, Berkeley, 1992.

[20] E. Paulos and J. Canny. Informed peg-in-hole insertion using optical sensors. In SPIE Conference on Sensor Fusion VI, Boston, Massachusetts, 1993.

[21] J. Ponce, A. Hoogs, and D. J. Kriegman. On using CAD models to compute the pose of curved 3d objects. CVGIP: Image Understanding, 55(2):184–197, 1992.

[22] J. Ponce and D. J. Kriegman. Elimination theory and computer vision: Recognition and positioning of curved 3d objects from range, intensity, or contours. In Symbolic and Numerical Computation for Artificial Intelligence, pages 123–146, 1992.

[23] D. Rosen. Errors in digital area measurement of regular 2d figures. In Vision Geometry II, pages 26–32, September 1993.

[24] G. Salmon. Lessons Introductory to the Modern Higher Algebra. G. E. Stechert & Co., New York, 1885.

[25] H. Schenck. Theories of Engineering Experimentation. McGraw-Hill Book Company, 1979.

[26] J. T. Schwartz and M. Sharir. Identification of partially obscured objects in three dimensions by matching noisy characteristic curves. International Journal of Robotics Research, 6(2):29–44, 1987.

[27] A. Wallack, J. Canny, and D. Manocha. Object localization using crossbeam sensing. In IEEE International Conference on Robotics and Automation, volume 1, pages 692–699, May 1993.

[28] Z. Zhang and O. Faugeras. Building a 3d world model with a mobile robot: 3d line segment representation and integration. In International Conference on Pattern Recognition, pages 38–42, 1990.


Object        Resolution   |Pixels|   sqrt(1/n Σ(x-x̂)²)   sqrt(1/n Σ(y-ŷ)²)   sqrt(1/n Σ(θ-θ̂)²)   Error
Hex-model     4.0            44.4     0.33446434           0.23129888           0.013330161          92.927956
Hex-model     2.0            82.4     0.07352662           0.049923003          0.0047521484         44.196507
Hex-model     1.0           157.6     0.041777074          0.0063066096         0.0036297794         21.544956
Hex-model     0.5           309.5     0.0031298357         0.0016034179         4.643562e-4          10.784967
Hex-model     0.25          614.6     0.0042416616         3.8894132e-4         1.0569962e-4          5.4200945
Letter "T"    4.0            49.2     0.5133025            0.60952026           0.021071866         103.38039
Letter "T"    2.0            90.4     0.13007967           0.25665605           0.00581296           48.95305
Letter "T"    1.0           174.3     0.054255202          0.124452725          0.0047051026         24.013884
Letter "T"    0.5           343.2     0.012289527          0.017529113          6.203042e-4          12.062334
Letter "T"    0.25          679.6     0.0040623806         0.0034328252         1.9236103e-4          6.0177937
Number "6"    4.0           106.5     0.21361086           0.19730452           0.010275254         228.42712
Number "6"    2.0           205.8     0.054341987          0.049577143          0.004476396         112.03862
Number "6"    1.0           405.9     0.010724932          0.02340998           0.0010456733         56.430653
Number "6"    0.5           803.6     0.0059675984         0.004836324          1.6956787e-4         28.311207
Number "6"    0.25         1600.9     0.0017914316         0.0019320602         7.470778e-5          14.09613
Fiducial      4.0            61.4     0.21314956           0.14251679           0.007586037         128.73416
Fiducial      2.0           120.0     0.057874706          0.07354973           0.0030514833         66.32908
Fiducial      1.0           237.4     0.029797971          0.02263394           8.284851e-4          33.25131
Fiducial      0.5           468.0     0.004916789          0.004921388          1.2238462e-4         16.175451
Fiducial      0.25          932.4     0.007628871          0.0050956924         1.399027e-4           8.207249

Table 11: Average localization technique performance for various objects at various resolutions

Object                Resolution   |Pixels|   sqrt(1/n Σ(x-x̂)²)   sqrt(1/n Σ(y-ŷ)²)   sqrt(1/n Σ(θ-θ̂)²)   Error
Hex-model (corners)   4.0            12.2     0.8087669            0.74312437           0.021346115         11.319004
Hex-model (corners)   2.0            12.5     0.15569122           0.25277224           0.013486578          3.3304203
Hex-model (corners)   1.0            13.1     0.036884286          0.116834834          0.00448842           1.0500076
Hex-model (corners)   0.5            14.9     0.024275968          0.024562975          0.005042185          0.26821047
Hex-model (corners)   0.25           17.4     0.0063632606         0.004727341          0.0024016364         0.09157471

Table 12: Average localization technique performance using only the corners of the hex-model object at various resolutions