
Solving Polynomial Equations Using Linear Algebra

Michael Peretzian Williams

Quadric intersection is a common class of nonlinear systems of equations. Quadrics, the class of all degree-two polynomials in three or more variables, appear in many engineering problems, such as multilateration. Typically, numerical methods are used to solve such problems. Unfortunately, these methods require an initial guess and, although rates of convergence are well understood, convergence is not necessarily certain. The method discussed in this article transforms the problem of simultaneously solving a system of polynomials into a linear algebra problem that, unlike other root-finding methods, does not require an initial guess. Additionally, iterative methods give only one solution at a time; the method outlined here gives all solutions, including complex ones.

INTRODUCTION

Multivariate polynomials show up in many applications. Polynomials are attractive because they are well understood and they have significant simplicity and structure in that they form vector spaces and rings. Additionally, degree-two polynomials (the generalization of conic sections known as quadrics) show up in many engineering applications, including multilateration. This article begins with a discussion on finding roots of univariate polynomials via eigenvalues/eigenvectors of companion matrices. Next, we briefly introduce the concept of a ring to enable us to discuss ring representations of polynomials, which are a generalization of companion matrices. We then discuss how multivariate polynomial systems can be solved with ring representations. Following that, we outline an algorithm attributable to Emiris (Ref. 1) that is used to find ring representations. Finally, we give an example of the methods applied to a trilateration quadric-intersection problem.

COMPANION MATRICES: FINDING ROOTS OF UNIVARIATE POLYNOMIALS AS AN EIGENVALUE/EIGENVECTOR PROBLEM

Recall that by definition an n × n matrix M has eigenvectors v_i and eigenvalues λ_i if Mv_i = λ_i v_i. Such a matrix can have at most n distinct eigenvalues. Solving for the λ_i is equivalent to solving det(M − xI) = 0. This calculation yields a polynomial in x called the "characteristic polynomial," the roots of which are the eigenvalues λ_i. Once the λ_i values are obtained, the eigenvectors v_i can be calculated by solving (M − λ_i I)v_i = 0. Hence, it is not surprising that eigenvalues/eigenvectors provide a method for solving polynomials.

Next, we consider the univariate polynomial $p(x) = \sum_{i=0}^{n} C_i x^i$, where $C_n = 1$. The matrix

$$
M = \begin{bmatrix}
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1 \\
-C_0 & -C_1 & \cdots & -C_{n-1}
\end{bmatrix}
$$

is known as the "companion matrix" for p. Recall that the Vandermonde matrix is defined as

$$
V(x_1, x_2, \ldots, x_n) = \begin{bmatrix}
1 & 1 & \cdots & 1 \\
x_1 & x_2 & \cdots & x_n \\
\vdots & \vdots & \ddots & \vdots \\
x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1}
\end{bmatrix}.
$$

As it turns out, the roots of the characteristic polynomial of M [i.e., det(M − xI) = 0] are precisely the roots of the polynomial p, and the eigenvectors are Vandermonde vectors of the roots. Indeed, if λ_j is a root of p [i.e., p(λ_j) = 0] and v is the Vandermonde vector $v = [1\ \ \lambda_j\ \ \cdots\ \ \lambda_j^{n-1}]^T$, then

$$
Mv = \begin{bmatrix}
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1 \\
-C_0 & -C_1 & \cdots & -C_{n-1}
\end{bmatrix}
\begin{bmatrix} 1 \\ \lambda_j \\ \vdots \\ \lambda_j^{n-1} \end{bmatrix}
= \begin{bmatrix} \lambda_j \\ \lambda_j^2 \\ \vdots \\ -\sum_{i=0}^{n-1} C_i \lambda_j^i \end{bmatrix}
= \begin{bmatrix} \lambda_j \\ \lambda_j^2 \\ \vdots \\ \lambda_j^n - p(\lambda_j) \end{bmatrix}
= \begin{bmatrix} \lambda_j \\ \lambda_j^2 \\ \vdots \\ \lambda_j^n \end{bmatrix}
= \lambda_j v.
$$

Hence, the eigenvalues of M are the roots of p. Likewise, the right eigenvectors of M are the columns of the Vandermonde matrix of the roots of p. The most important property of companion matrices in this article can be stated as follows: Given a polynomial p, the companion matrix construction yields a matrix M such that the characteristic polynomial of M is p [i.e., det(M − xI) = ±p(x)].

To give a better understanding of the above statement, consider the following concrete example. Let

$$
p(x) = (x - 2)(x - 3)(x - 5) = x^3 - (2 + 3 + 5)x^2 + (2 \cdot 3 + 2 \cdot 5 + 3 \cdot 5)x - 2 \cdot 3 \cdot 5 = x^3 - 10x^2 + 31x - 30.
$$

We can verify (for this case) that the eigenvalue/eigenvector statements made above hold [i.e., det(M − xI) = ±p(x)]. The companion matrix for p(x) is


$$
M = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 30 & -31 & 10 \end{bmatrix}.
$$

Since the roots of p are (2, 3, 5) by construction, the eigenvectors of M should be given by the Vandermonde matrix

$$
V(2, 3, 5) = \begin{bmatrix} 1 & 1 & 1 \\ 2 & 3 & 5 \\ 4 & 9 & 25 \end{bmatrix}.
$$

We observe that

$$
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 30 & -31 & 10 \end{bmatrix}
\begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix}
= \begin{bmatrix} 2 \\ 4 \\ 30 - 2 \cdot 31 + 4 \cdot 10 \end{bmatrix}
= \begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix}
= 2 \begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix}
$$

$$
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 30 & -31 & 10 \end{bmatrix}
\begin{bmatrix} 1 \\ 3 \\ 9 \end{bmatrix}
= \begin{bmatrix} 3 \\ 9 \\ 30 - 3 \cdot 31 + 9 \cdot 10 \end{bmatrix}
= \begin{bmatrix} 3 \\ 9 \\ 27 \end{bmatrix}
= 3 \begin{bmatrix} 1 \\ 3 \\ 9 \end{bmatrix}
$$

$$
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 30 & -31 & 10 \end{bmatrix}
\begin{bmatrix} 1 \\ 5 \\ 25 \end{bmatrix}
= \begin{bmatrix} 5 \\ 25 \\ 30 - 5 \cdot 31 + 25 \cdot 10 \end{bmatrix}
= \begin{bmatrix} 5 \\ 25 \\ 125 \end{bmatrix}
= 5 \begin{bmatrix} 1 \\ 5 \\ 25 \end{bmatrix}.
$$

Hence, the eigenvalues of M are given by λ = 2, 3, 5, the roots of p(x) as claimed. Also, we note that p is the characteristic polynomial for M [i.e., det(M − xI) = ±p(x)]. Indeed,

$$
\begin{aligned}
\det(M - \lambda I) &= \det \begin{bmatrix} -\lambda & 1 & 0 \\ 0 & -\lambda & 1 \\ 30 & -31 & 10 - \lambda \end{bmatrix} \\
&= -\lambda \det \begin{bmatrix} -\lambda & 1 \\ -31 & 10 - \lambda \end{bmatrix}
 - 1 \cdot \det \begin{bmatrix} 0 & 1 \\ 30 & 10 - \lambda \end{bmatrix}
 + 0 \cdot \det \begin{bmatrix} 0 & -\lambda \\ 30 & -31 \end{bmatrix} \\
&= -\lambda\bigl(-\lambda(10 - \lambda) + 31\bigr) + 30 \\
&= -\lambda^3 + 10\lambda^2 - 31\lambda + 30 = -p(\lambda).
\end{aligned}
$$

Hence, det(M − xI) = ±p(x) as claimed. In this section, we have outlined univariate examples of the companion matrix and Vandermonde matrix. Indeed, we have demonstrated that finding the roots of a univariate polynomial is an eigenvalue/eigenvector problem. In a later section, we discuss the u-resultant, which is used to construct a generalization of the companion matrix. These generalizations, known herein as "ring representations," can be used to solve polynomial systems, including quadric intersection.
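To make the construction concrete in code, the following is a minimal sketch (assuming NumPy is available; the variable names are ours, not the article's) that builds the companion matrix of p(x) = x³ − 10x² + 31x − 30 and recovers both the roots and their Vandermonde-structured eigenvectors:

```python
import numpy as np

# Companion matrix of the monic polynomial p(x) = x^3 - 10x^2 + 31x - 30;
# the last row holds [-C0, -C1, -C2] = [30, -31, 10].
M = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [30.0, -31.0, 10.0]])

vals, vecs = np.linalg.eig(M)
print(np.sort(vals.real))        # -> [2. 3. 5.], the roots of p

# Rescaled so its first entry is 1, each eigenvector is the Vandermonde
# vector [1, r, r^2] of its root r.
for k in range(3):
    print(vecs[:, k] / vecs[0, k])
```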

RINGS AND RING REPRESENTATIONS

Previously, we discussed transforming univariate polynomial root finding into an eigenvalue/eigenvector problem. For the remainder of this article, we will generalize the method above to simultaneously solve systems of multivariate polynomial equations. More precisely, we want to find coordinates (x₁, x₂, . . . , xₙ) such that for polynomials f₁, f₂, . . . , fₙ


$$
\begin{aligned}
f_1(x_1, x_2, \ldots, x_n) &= 0 \\
f_2(x_1, x_2, \ldots, x_n) &= 0 \\
&\ \,\vdots \\
f_n(x_1, x_2, \ldots, x_n) &= 0.
\end{aligned}
$$

As with many multivariate generalizations, the univariate case does not require understanding of the deeper structure inherent to the problem. The last section is no exception. A brief discussion regarding the algebraic concept of a ring is necessary to advance the generalization.

Rings are algebraic structures having both an addition and a multiplication but not necessarily a division. The most common rings are the integers, univariate polynomials, and multivariate polynomials. For any a, b, c in a ring R, the following properties hold.

Addition:
A. Closure, a + b is in R
B. Associativity, (a + b) + c = a + (b + c)
C. Identity, a + 0 = 0 + a = a
D. Inverse, a + (−a) = (−a) + a = 0
E. Commutativity, a + b = b + a

Multiplication:
A. Closure, ab is in R
B. Associativity, (ab)c = a(bc)
C. Identity, a1 = 1a = a (optional)
D. Inverse, a(a⁻¹) = (a⁻¹)a = 1 (optional)
E. Commutativity, ab = ba (optional)

Algebra, the branch of mathematics in which rings are studied, commonly looks at an operation called adjoining, or including new elements. If R is a ring, then adjoining is accomplished by including some new element j and creating the ring R[j] in the following fashion: R[j] = {a₀ + a₁j + a₂j² + a₃j³ + · · · for all a₀, a₁, a₂, a₃, . . . in R}. A common instance of this operation is adjoining the imaginary unit i to the real numbers to obtain the complex numbers.

Because matrices are well understood and easily implemented in code, ring representations (a special family of matrices particular to a ring) are a common way of handling algebraic applications. In the same way that logarithms map multiplication to addition (log(ab) = log(a) + log(b)) and Fourier transforms map convolution to multiplication (F[f ∗ g] = F[f]F[g]), ring representations map ring multiplication and ring addition to matrix multiplication and matrix addition.

There are many ways to view a matrix. It can be, and typically is, viewed as a transformation whose eigenvalues could be adjoined to a ring if they were not present. Alternatively, because p(M) = 0 (where p is the characteristic polynomial of M), the matrix can itself be viewed as an element that can be adjoined because matrices are roots of their characteristic polynomials. Let R be any ring. The mapping to adjoin a matrix M to the ring R, obtaining the ring R[M], is as follows: R[M] = {u₀I + u₁M + u₂M² + · · · + uₙMⁿ for all u₀, u₁, u₂, . . . , uₙ in R}.

Each element u₀I + u₁M + u₂M² + · · · + uₙMⁿ in R[M] is a ring representation of the element u₀ + u₁λ + u₂λ² + · · · + uₙλⁿ in R[λ], where λ is an eigenvalue of M.

For a concrete example, consider the complex numbers, which form a ring as they satisfy all of the properties above. To cast the complex numbers as ring representations over the real numbers, consider the characteristic polynomial of the complex numbers, i.e., p = x² + 1 = x² + 0x + 1 = 0. In order to form a ring representation, we are interested in adjoining a matrix whose eigenvalues are the roots of p. The companion matrix of this polynomial, as discussed above, will provide just such a matrix. The companion matrix is


$$
J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.
$$

Indeed,

$$
\det(J - \lambda I) = \det \begin{bmatrix} -\lambda & 1 \\ -1 & -\lambda \end{bmatrix} = \lambda^2 + 1, \qquad \lambda = \pm i.
$$

We can now define a ring representation of the complex numbers as rep(u₀ + u₁i) = u₀I + u₁J for all u₀, u₁ in R. More concretely, we have

$$
\mathrm{rep}(u_0 + u_1 i) = \begin{bmatrix} u_0 & u_1 \\ -u_1 & u_0 \end{bmatrix}.
$$

This construction preserves addition. Indeed,

$$
\mathrm{rep}(u_0 + u_1 i) + \mathrm{rep}(v_0 + v_1 i)
= \begin{bmatrix} u_0 & u_1 \\ -u_1 & u_0 \end{bmatrix} + \begin{bmatrix} v_0 & v_1 \\ -v_1 & v_0 \end{bmatrix}
= \begin{bmatrix} u_0 + v_0 & u_1 + v_1 \\ -(u_1 + v_1) & u_0 + v_0 \end{bmatrix}
= \mathrm{rep}\bigl((u_0 + v_0) + (u_1 + v_1)i\bigr).
$$

Also, the construction preserves multiplication as

$$
\begin{aligned}
\mathrm{rep}(u_0 + u_1 i)\,\mathrm{rep}(v_0 + v_1 i)
&= \begin{bmatrix} u_0 & u_1 \\ -u_1 & u_0 \end{bmatrix} \begin{bmatrix} v_0 & v_1 \\ -v_1 & v_0 \end{bmatrix}
= \begin{bmatrix} u_0 v_0 - u_1 v_1 & u_0 v_1 + u_1 v_0 \\ -(u_0 v_1 + u_1 v_0) & u_0 v_0 - u_1 v_1 \end{bmatrix} \\
&= \mathrm{rep}\bigl((u_0 v_0 - u_1 v_1) + (u_0 v_1 + u_1 v_0)i\bigr) \\
&= \mathrm{rep}\bigl((u_0 + u_1 i)(v_0 + v_1 i)\bigr).
\end{aligned}
$$

Remarkably, this process can be generalized to include the roots of several multivariate polynomials.
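As a quick numerical sanity check of the two identities above, the following sketch (again assuming NumPy; `rep` is our own helper, not a library function) adds and multiplies 2 × 2 representations of complex numbers:

```python
import numpy as np

def rep(u0, u1):
    """2x2 real-matrix representation of the complex number u0 + u1*i."""
    return np.array([[u0, u1],
                     [-u1, u0]], dtype=float)

a = rep(1.0, 2.0)     # represents 1 + 2i
b = rep(3.0, -1.0)    # represents 3 - i

print(a + b)          # rep(4, 1):  (1 + 2i) + (3 - i) = 4 + i
print(a @ b)          # rep(5, 5):  (1 + 2i)(3 - i) = 5 + 5i
```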

MOTIVATION FOR USING RING REPRESENTATIONS

As stated in the Companion Matrices section, one interpretation of the companion matrix is as an answer to the inverse eigenvalue problem: given a polynomial p, find a matrix M such that p is the characteristic polynomial of M. We will show that ring representations are a multivariate generalization of companion matrices. In this section, we will motivate discussion of Emiris' algorithm for finding ring representations with an example. Consider the following system of equations:

$$
\begin{aligned}
f_1 &= x^2 + y^2 - 10 = 0 \\
f_2 &= x^2 + xy + 2y^2 - 16 = 0.
\end{aligned}
$$

Let R now be the real numbers. The above equations are elements in the polynomial ring R[x, y]. If we could find ring representations X and Y for x and y, respectively, in a ring where f₁ = f₂ = 0, the following would be true by the properties of ring representations:

$$
\begin{aligned}
f_1 &= X^2 + Y^2 - 10I = 0 \\
f_2 &= X^2 + XY + 2Y^2 - 16I = 0.
\end{aligned}
$$


Furthermore, because the ring defined by the equations is commutative, X and Y would necessarily commute. As a result, if X and Y are nondefective (have distinct eigenvalues), then they are simultaneously diagonalizable (they can be diagonalized with the same matrix of eigenvectors or, equivalently, they share all of their eigenvectors). Consequently, the diagonals of the diagonalized representations must be the coordinates of the roots. If V is the matrix of eigenvectors for X and Y, then the statements above can be expressed symbolically as follows:

$$
\begin{aligned}
V^{-1}(X^2 + Y^2 - 10I)V &= 0 \\
(V^{-1}XV)^2 + (V^{-1}YV)^2 - V^{-1}(10I)V &= 0
\end{aligned}
$$

$$
\begin{bmatrix} x_1 & 0 & 0 & 0 \\ 0 & x_2 & 0 & 0 \\ 0 & 0 & x_3 & 0 \\ 0 & 0 & 0 & x_4 \end{bmatrix}^2
+ \begin{bmatrix} y_1 & 0 & 0 & 0 \\ 0 & y_2 & 0 & 0 \\ 0 & 0 & y_3 & 0 \\ 0 & 0 & 0 & y_4 \end{bmatrix}^2
- \begin{bmatrix} 10 & 0 & 0 & 0 \\ 0 & 10 & 0 & 0 \\ 0 & 0 & 10 & 0 \\ 0 & 0 & 0 & 10 \end{bmatrix}
= 0
$$

$$
\begin{bmatrix}
x_1^2 + y_1^2 - 10 & 0 & 0 & 0 \\
0 & x_2^2 + y_2^2 - 10 & 0 & 0 \\
0 & 0 & x_3^2 + y_3^2 - 10 & 0 \\
0 & 0 & 0 & x_4^2 + y_4^2 - 10
\end{bmatrix} = 0.
$$

Similarly,

$$
V^{-1}(X^2 + XY + 2Y^2 - 16I)V =
\begin{bmatrix}
x_1^2 + x_1 y_1 + 2y_1^2 - 16 & 0 & 0 & 0 \\
0 & x_2^2 + x_2 y_2 + 2y_2^2 - 16 & 0 & 0 \\
0 & 0 & x_3^2 + x_3 y_3 + 2y_3^2 - 16 & 0 \\
0 & 0 & 0 & x_4^2 + x_4 y_4 + 2y_4^2 - 16
\end{bmatrix} = 0.
$$

Indeed, it turns out for this example that ring representations for x and y are as follows:

$$
X = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 4 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 3 & 2 & 0 \end{bmatrix},
\qquad
Y = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 6 & 0 & 0 & -1 \\ 0 & 3 & -2 & 0 \end{bmatrix}.
$$

It is easy to verify

$$
\begin{aligned}
f_1 &= X^2 + Y^2 - 10I \\
&= \begin{bmatrix} 0 & 1 & 0 & 0 \\ 4 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 3 & 2 & 0 \end{bmatrix}^2
+ \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 6 & 0 & 0 & -1 \\ 0 & 3 & -2 & 0 \end{bmatrix}^2
+ \begin{bmatrix} -10 & 0 & 0 & 0 \\ 0 & -10 & 0 & 0 \\ 0 & 0 & -10 & 0 \\ 0 & 0 & 0 & -10 \end{bmatrix} \\
&= \begin{bmatrix} 4 & 0 & 0 & 1 \\ 0 & 7 & 2 & 0 \\ 0 & 3 & 2 & 0 \\ 12 & 0 & 0 & 5 \end{bmatrix}
+ \begin{bmatrix} 6 & 0 & 0 & -1 \\ 0 & 3 & -2 & 0 \\ 0 & -3 & 8 & 0 \\ -12 & 0 & 0 & 5 \end{bmatrix}
+ \begin{bmatrix} -10 & 0 & 0 & 0 \\ 0 & -10 & 0 & 0 \\ 0 & 0 & -10 & 0 \\ 0 & 0 & 0 & -10 \end{bmatrix} \\
&= \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.
\end{aligned}
$$

Similar calculations can be done for f₂. The eigenvalues of X and Y, which yield the coordinates of the roots of the polynomial system, are λ_X = (±√8, ±1) and λ_Y = (±√2, ∓3). Indeed,

$$
\begin{aligned}
(\pm\sqrt{8})^2 + (\pm\sqrt{2})^2 - 10 &= 0 \\
(\pm\sqrt{8})^2 + (\pm\sqrt{8})(\pm\sqrt{2}) + 2(\pm\sqrt{2})^2 - 16 &= 0 \\
(\pm 1)^2 + (\mp 3)^2 - 10 &= 0 \\
(\pm 1)^2 + (\pm 1)(\mp 3) + 2(\mp 3)^2 - 16 &= 0.
\end{aligned}
$$
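These claims are easy to check numerically. The sketch below (assuming NumPy) verifies that X and Y satisfy both polynomials, that they commute, and that diagonalizing them with a shared eigenvector matrix pairs up the root coordinates:

```python
import numpy as np

X = np.array([[0, 1, 0, 0],
              [4, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 3, 2, 0]], dtype=float)
Y = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [6, 0, 0, -1],
              [0, 3, -2, 0]], dtype=float)
I = np.eye(4)

print(np.allclose(X @ X + Y @ Y - 10 * I, 0))                # f1(X, Y) = 0
print(np.allclose(X @ X + X @ Y + 2 * (Y @ Y) - 16 * I, 0))  # f2(X, Y) = 0
print(np.allclose(X @ Y, Y @ X))                             # X and Y commute

# Diagonalize X and read the matching y-coordinates off V^-1 Y V.
x_vals, V = np.linalg.eig(X)
y_vals = np.diag(np.linalg.inv(V) @ Y @ V)
for x, y in zip(x_vals, y_vals):
    print(x.real, y.real)   # pairs (±√8, ±√2) and (±1, ∓3)
```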

EMIRIS' ALGORITHM

We have established that, if we have a way of determining the ring representations X and Y, we can solve the system of equations. Emiris' algorithm, a method to accomplish just that, is discussed notionally here. For a full proof and description, see Ref. 1 or 2. Given polynomials f₁, f₂, . . . , fₙ of degrees d₁, d₂, . . . , dₙ, respectively, the algorithm is as follows:

1. Introduce an extra polynomial f₀ = u₀ + u₁x₁ + u₂x₂ + · · · + uₙxₙ, where u₀, u₁, . . . , uₙ are arbitrarily chosen constants, at least one of which is nonzero.

2. Consider the set of monomials

$$
B = \{\, x_1^{i_1} x_2^{i_2} \cdots x_n^{i_n} \mid i_0 + i_1 + \cdots + i_n = D,\ 0 \le i_0, i_1, \ldots, i_n \le D \,\},
$$

where D = d₁ + d₂ + · · · + dₙ − n + 1 (the slack exponent i₀ simply allows monomials of total degree less than D).

3. Consider

$$
\begin{aligned}
S_n &= \{\, b \in B \mid x_n^{d_n} \text{ divides } b \,\} \\
S_{n-1} &= \{\, b \in B \mid x_{n-1}^{d_{n-1}} \text{ divides } b \text{ but } x_n^{d_n} \text{ does not} \,\} \\
S_{n-2} &= \{\, b \in B \mid x_{n-2}^{d_{n-2}} \text{ divides } b \text{ but } x_n^{d_n}, x_{n-1}^{d_{n-1}} \text{ do not} \,\} \\
&\ \,\vdots \\
S_1 &= \{\, b \in B \mid x_1^{d_1} \text{ divides } b \text{ but } x_n^{d_n}, x_{n-1}^{d_{n-1}}, \ldots, x_2^{d_2} \text{ do not} \,\} \\
S_0 &= \Bigl\{\, b \in B \mid b \notin \bigcup_{i=1}^{n} S_i \,\Bigr\}
\end{aligned}
$$

4. Find M, a linear transformation on the monomials in B, such that


$$
B(x_1, x_2, \ldots, x_n) = B(\vec{x}) = \begin{bmatrix} \vec{S}_0(\vec{x}) \\ \vec{S}_1(\vec{x}) \\ \vdots \\ \vec{S}_n(\vec{x}) \end{bmatrix},
\qquad
M \cdot B(\vec{x}) = \begin{bmatrix} \vec{S}_0(\vec{x}) f_0(\vec{x}) \\ (\vec{S}_1(\vec{x}) / x_1^{d_1}) f_1(\vec{x}) \\ \vdots \\ (\vec{S}_n(\vec{x}) / x_n^{d_n}) f_n(\vec{x}) \end{bmatrix},
$$

where $\vec{S}_i(\vec{x})$ denotes the vector of monomials in Sᵢ evaluated at $\vec{x}$.

5. Consider a common root $\vec{p} = (p_1, p_2, \ldots, p_n)$ of f₁, . . . , fₙ. Because f₁(p⃗) = · · · = fₙ(p⃗) = 0,

$$
M \cdot B(\vec{p}) = \begin{bmatrix} \vec{S}_0(\vec{p}) f_0(\vec{p}) \\ (\vec{S}_1(\vec{p}) / p_1^{d_1}) f_1(\vec{p}) \\ \vdots \\ (\vec{S}_n(\vec{p}) / p_n^{d_n}) f_n(\vec{p}) \end{bmatrix}
= \begin{bmatrix} \vec{S}_0(\vec{p}) f_0(\vec{p}) \\ 0 \\ \vdots \\ 0 \end{bmatrix}
$$

6. Break M into blocks as follows:

$$
M \cdot B(\vec{p}) = \begin{bmatrix} M_{00} & M_{01} \\ M_{10} & M_{11} \end{bmatrix}
\begin{bmatrix} \vec{S}_0(\vec{p}) \\ \vdots \\ \vec{S}_n(\vec{p}) \end{bmatrix}
= \begin{bmatrix} \vec{S}_0(\vec{p}) f_0(\vec{p}) \\ 0 \\ \vdots \\ 0 \end{bmatrix}
$$

7. Block triangularize with the following transformation:

$$
\begin{bmatrix} I & -M_{01}M_{11}^{-1} \\ 0 & I \end{bmatrix}
\begin{bmatrix} M_{00} & M_{01} \\ M_{10} & M_{11} \end{bmatrix}
= \begin{bmatrix} \tilde{M} & 0 \\ M_{10} & M_{11} \end{bmatrix},
\qquad
\begin{bmatrix} \tilde{M} & 0 \\ M_{10} & M_{11} \end{bmatrix}
\begin{bmatrix} \vec{S}_0(\vec{p}) \\ \vdots \end{bmatrix}
= \begin{bmatrix} I & -M_{01}M_{11}^{-1} \\ 0 & I \end{bmatrix}
\begin{bmatrix} \vec{S}_0(\vec{p}) f_0(\vec{p}) \\ 0 \end{bmatrix}
= \begin{bmatrix} \vec{S}_0(\vec{p}) f_0(\vec{p}) \\ 0 \end{bmatrix}
$$

8. As a result of the above, $\tilde{M} = \tilde{M}(u_0, u_1, \ldots, u_n) = M_{00} - M_{01}M_{11}^{-1}M_{10}$.

9. Solve the following eigenvector problem:

$$
\tilde{M}\,\vec{S}_0(\vec{p}) = f_0(\vec{p})\,\vec{S}_0(\vec{p}) = (u_0 + u_1 p_1 + \cdots + u_n p_n)\,\vec{S}_0(p_1, p_2, \ldots, p_n)
$$

10. Since S₀(p₁, p₂, . . . , pₙ) = [a, ap₁, ap₂, . . . , apₙ, ap₁p₂, ap₁p₃, . . .]ᵀ for some scale factor a (because the eigenvector will be derived numerically, it is known only up to scale), the root coordinates can be extracted by dividing the eigenvector by a and collecting the 2nd through (n + 1)st entries.

The roots of the polynomial system are the coordinates (p1, p2, . . . , pn).
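For the two-quadric case used throughout this article (n = 2, d₁ = d₂ = 2, so D = 3 and S₀ = {1, x, y, xy}), the steps above fit in a short program. The following is a minimal sketch, not Emiris' reference implementation: it assumes NumPy, represents polynomials as dictionaries mapping exponent pairs to coefficients, assumes M₁₁ is invertible for the chosen inputs, and takes the generic constants u₀, u₁, u₂ as an argument (defaulting to the 8, 28, 80 used in the Application section below).

```python
import numpy as np

def solve_two_quadrics(f1, f2, u=(8.0, 28.0, 80.0)):
    """Roots of two bivariate quadrics {f1 = 0, f2 = 0} via Emiris' method.

    Polynomials are dicts mapping exponents (i, j) of x^i y^j to coefficients.
    Returns a list of (x, y) coordinates, possibly complex.
    """
    u0, u1, u2 = u
    f0 = {(0, 0): u0, (1, 0): u1, (0, 1): u2}  # step 1: f0 = u0 + u1 x + u2 y
    D = 2 + 2 - 2 + 1                          # step 2: D = d1 + d2 - n + 1 = 3
    B = [(i, j) for i in range(D + 1) for j in range(D + 1 - i)]
    S2 = [b for b in B if b[1] >= 2]                # y^2 divides b
    S1 = [b for b in B if b[0] >= 2 and b[1] < 2]   # x^2 divides b, y^2 does not
    S0 = [b for b in B if b[0] < 2 and b[1] < 2]    # step 3: S0 = {1, x, y, xy}
    order = S0 + S1 + S2
    col = {m: k for k, m in enumerate(order)}

    def rows(S, f, div):
        # step 4: each row expresses (b / x^div) * f in the monomial basis
        out = []
        for b in S:
            q = (b[0] - div[0], b[1] - div[1])
            r = np.zeros(len(order))
            for m, c in f.items():
                r[col[(q[0] + m[0], q[1] + m[1])]] += c
            out.append(r)
        return out

    M = np.array(rows(S0, f0, (0, 0)) +
                 rows(S1, f1, (2, 0)) +
                 rows(S2, f2, (0, 2)))
    k = len(S0)
    # steps 6-8: Schur complement M~ = M00 - M01 M11^-1 M10
    Mt = M[:k, :k] - M[:k, k:] @ np.linalg.solve(M[k:, k:], M[k:, :k])
    # steps 9-10: eigenvectors are S0(p) up to scale; normalize and read roots
    _, vecs = np.linalg.eig(Mt)
    roots = []
    for v in vecs.T:
        v = v / v[S0.index((0, 0))]
        roots.append((v[S0.index((1, 0))], v[S0.index((0, 1))]))
    return roots

# The motivating system: x^2 + y^2 - 10 = 0 and x^2 + xy + 2y^2 - 16 = 0.
f1 = {(2, 0): 1.0, (0, 2): 1.0, (0, 0): -10.0}
f2 = {(2, 0): 1.0, (1, 1): 1.0, (0, 2): 2.0, (0, 0): -16.0}
for x, y in solve_two_quadrics(f1, f2):
    print(x, y)    # expect (±√8, ±√2) and (±1, ∓3)
```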

APPLICATION

One application for Emiris' algorithm is trilateration. Trilateration is achieved by measuring the time difference of arrival between pairs of sensors. The so-called TDOA (time difference of arrival) satisfies ||x − S₁|| − ||x − S₂|| = ±TDOA · C, where S₁ and S₂ are sensor positions, x is an emitter position, TDOA is the time difference of arrival, and C is the emission propagation rate. It is well known that this level set can be represented as a polynomial (indeed, it is a hyperbola). The polynomial that describes the level set above is derived as follows:


$$
\| x - S_1 \| - \| x - S_2 \| = \pm \mathrm{TDOA} \cdot C
$$

Writing S₁ = (S₁ + S₂)/2 + (S₁ − S₂)/2 and S₂ = (S₁ + S₂)/2 − (S₁ − S₂)/2, let

$$
u = (S_1 + S_2)/2, \qquad v = (S_1 - S_2)/2, \qquad y = x - u, \qquad \mathrm{TDOA} \cdot C = 2\delta.
$$

The equation above becomes

$$
\begin{aligned}
\| y + v \| - \| y - v \| &= \pm 2\delta \\
\| y + v \| &= \pm 2\delta + \| y - v \| \\
\| y + v \|^2 &= 4\delta^2 \pm 4\delta \| y - v \| + \| y - v \|^2 \\
y^T y + 2 y^T v + v^T v &= 4\delta^2 \pm 4\delta \| y - v \| + y^T y - 2 y^T v + v^T v \\
4 y^T v &= 4\delta^2 \pm 4\delta \| y - v \| \\
y^T v - \delta^2 &= \pm \delta \| y - v \| \\
(y^T v)^2 - 2\delta^2 y^T v + \delta^4 &= \delta^2 \| y - v \|^2 \\
(y^T v)^2 - 2\delta^2 y^T v + \delta^4 &= \delta^2 y^T y - 2\delta^2 y^T v + \delta^2 v^T v \\
(y^T v)^2 + \delta^4 &= \delta^2 y^T y + \delta^2 v^T v \\
y^T v v^T y + \delta^4 &= \delta^2 y^T y + \delta^2 v^T v \\
y^T (v v^T - \delta^2 I) y - \delta^2 v^T v + \delta^4 &= 0.
\end{aligned}
$$

As an example, let's consider the problem of locating a gunshot within a city. Suppose there are three acoustic sensors (S₁, S₂, and S₃) at (9, 39), (65, 10), and (64, 71), respectively, in a local Cartesian coordinate system with meters as the distance unit. Suppose further that a gunshot is heard by S₁ at 19:19:57.0875, by S₂ at 19:19:57.1797, and by S₃ at 19:19:57.1719, and that the speed of sound (C) is 341 m/s. Also suppose the unknown emitter location is at (27, 42) and that it emitted at 19:19:57.0340 (see Fig. 1). The time difference of arrival between S₁ and S₂ is 0.0922 s, and between S₁ and S₃ it is 0.0844 s. Hence, we wish to solve the system

$$
\begin{aligned}
\sqrt{(x - 9)^2 + (y - 39)^2} - \sqrt{(x - 65)^2 + (y - 10)^2} &= \pm 0.0922 \cdot 341 \\
\sqrt{(x - 9)^2 + (y - 39)^2} - \sqrt{(x - 64)^2 + (y - 71)^2} &= \pm 0.0844 \cdot 341.
\end{aligned}
$$

Equivalently, we solve

$$
\begin{aligned}
-207485.6417 - 19846.0765x + 31843.375y + 537.0281x^2 - 812xy - 36.7219y^2 &= 0 \\
2480777.5687 - 88508.5223x - 37529.9994y + 549.4318x^2 + 880xy + 49.1818y^2 &= 0.
\end{aligned}
$$
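These coefficients follow mechanically from the derivation yᵀ(vvᵀ − δ²I)y − δ²vᵀv + δ⁴ = 0. As a sketch (assuming NumPy; `tdoa_quadric` is our own illustrative helper, not part of the original article), expanding y = x − u into monomial coefficients looks like this:

```python
import numpy as np

def tdoa_quadric(Sa, Sb, tdoa, C=341.0):
    """Coefficient dict of the TDOA hyperbola for sensors Sa and Sb.

    Expands y^T (vv^T - d^2 I) y - d^2 v^T v + d^4 = 0 with y = x - u
    into monomial coefficients {(i, j): c} for x^i y^j.
    """
    Sa, Sb = np.asarray(Sa, float), np.asarray(Sb, float)
    u, v = (Sa + Sb) / 2.0, (Sa - Sb) / 2.0
    d = tdoa * C / 2.0
    A = np.outer(v, v) - d**2 * np.eye(2)
    const = u @ A @ u - d**2 * (v @ v) + d**4  # constant term after the shift
    lin = -2.0 * A @ u                         # linear terms from (x-u)^T A (x-u)
    return {(0, 0): const, (1, 0): lin[0], (0, 1): lin[1],
            (2, 0): A[0, 0], (1, 1): 2 * A[0, 1], (0, 2): A[1, 1]}

# Reproduces (to the rounding of the printed TDOAs) the two polynomials above:
print(tdoa_quadric((9, 39), (65, 10), 0.0922))
print(tdoa_quadric((9, 39), (64, 71), 0.0844))
```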

To apply Emiris' algorithm, we pick f₀ = u₀ + u₁x + u₂y = 8 + 28x + 80y and calculate the ring representation F₀ = u₀I + u₁X + u₂Y = 8I + 28X + 80Y:

$$
F_0 = \begin{bmatrix}
8 & 28 & 80 & 0 \\
-48618.3988 & 2548.0324 & -112.9542 & 84.5799 \\
-2483454.438 & 62895.7242 & 64660.2744 & -1549.6064 \\
-73249499.44 & 1969279.3923 & 1768850.5654 & -43682.6736
\end{bmatrix}.
$$


Figure 1. Emitter and sensor locations (sensors S₁, S₂, and S₃ and the emitter at (27, 42), annotated with the arrival times given above).

Figure 2. Emitter and sensor locations and eigenvector solutions.

The eigenvectors of F₀ are

$$
V = \begin{bmatrix}
1 & 1 & 1 & 1 \\
25.9777 & 44.1845 & 27 & 70.1086 \\
254.0739 & -100.0836 & 42 & 39.2354 \\
6600.2544 & -4422.1484 & 1134 & 2750.7347
\end{bmatrix}.
$$

The second and third rows are the x and y coordinates of the solutions, and we have recovered the emitter location at (27, 42), as seen in Fig. 2.
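Feeding these two quadrics to the `solve_two_quadrics` sketch from the Emiris' Algorithm section (with the same constants u₀, u₁, u₂ = 8, 28, 80) reproduces the eigenvector solutions, including the physical root near (27, 42):

```python
# Trilateration quadrics from the gunshot example, coefficients as in the text.
g1 = {(0, 0): -207485.6417, (1, 0): -19846.0765, (0, 1): 31843.375,
      (2, 0): 537.0281, (1, 1): -812.0, (0, 2): -36.7219}
g2 = {(0, 0): 2480777.5687, (1, 0): -88508.5223, (0, 1): -37529.9994,
      (2, 0): 549.4318, (1, 1): 880.0, (0, 2): 49.1818}

for x, y in solve_two_quadrics(g1, g2):
    print(round(x.real, 4), round(y.real, 4))   # one root is (27.0, 42.0)
```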

CONCLUSION

We have discussed eigenvector methods for finding the roots of systems of multivariate polynomials. Unlike the iterative numerical methods typically applied to this problem, the methods outlined in this article possess the numerical stability of numerical linear algebra, do not require a good initial guess of the solution, and give all solutions simultaneously. Furthermore, when only a poor initial guess is available, the methods outlined herein may produce solutions more quickly than iterative methods.

REFERENCES

1. Emiris, I. Z., "On the Complexity of Sparse Elimination," J. Complexity 12, 134–166 (1996).
2. Cox, D., Little, J., and O'Shea, D., Using Algebraic Geometry, Graduate Texts in Mathematics Series, Vol. 185, Springer, New York, 2nd Ed., pp. 122–128 (2005).

Michael Peretzian Williams received his B.S. in Mathematics in 1998 from the University of South Carolina. He earned his M.S. in Mathematics in 2001, again from the University of South Carolina, focusing on Number Theory. He received his Ph.D. in Mathematics in 2004 from North Carolina State University, working in Lie theory. From 2005 to 2007, Dr. Williams worked as a researcher for Northrop Grumman, developing algorithms for Dempster–Shafer fusion. He has been with APL's Global Engagement Department since 2007, and his current work is in upstream data fusion. His e-mail address is [email protected].
