Efficient Nearest Neighbor Searching for Motion Planning
Anna Atramentov, Dept. of Computer Science, Iowa State University, Ames, IA 50011 USA, [email protected]
Steven M. LaValle, Dept. of Computer Science, University of Illinois, Urbana, IL 61801 USA, [email protected]

Abstract

We present and implement an efficient algorithm for performing nearest-neighbor queries in topological spaces that usually arise in the context of motion planning. Our approach extends the Kd tree-based ANN algorithm, which was developed by Arya and Mount for Euclidean spaces. We argue the correctness of the algorithm and illustrate its efficiency through computed examples. We have applied the algorithm to both probabilistic roadmaps (PRMs) and Rapidly-exploring Random Trees (RRTs). Substantial performance improvements are shown for motion planning examples.

1 Introduction

Nearest neighbor searching is a fundamental problem in many applications, such as pattern recognition, statistics, and machine learning. It is also an important component in several path planning algorithms. Probabilistic roadmap (PRM) approaches [1, 10] build a graph of collision-free paths that attempts to capture the connectivity of the configuration space. The vertices represent configurations that are generated using random sampling, and attempts are made to connect each vertex to nearby vertices. Some roadmaps contain thousands of vertices, which can lead to substantial computation time for determining nearby vertices in some applications. Approaches based on Rapidly-exploring Random Trees (RRTs) [12, 13, 14] rely even more heavily on nearest neighbors. An RRT is a tree of paths that is grown incrementally. In each iteration, a random configuration is chosen, and the RRT vertex that is closest (with respect to a predefined metric) is selected for expansion. An attempt is made to connect the RRT vertex to the randomly-chosen state.

An approach that can efficiently find nearest neighbors can dramatically improve the performance of these path planners. Although packages exist, such as ANN ([16], U. of Maryland) and Ranger (SUNY Stony Brook), they are designed for the common case of generating nearest neighbors in R^n. This cannot be applied to path planning algorithms because of the complicated topologies of configuration spaces. The topologies that usually arise are Cartesian products of R, S^1, and projective spaces. In these cases, metric information must be appropriately processed by any data structure designed to perform nearest neighbor computations. The problem is further complicated by the high dimensionality of configuration spaces.

In this paper, we extend the nearest neighbor algorithm and part of the ANN package of Arya and Mount [16] to handle general topologies with very little computational overhead. Related literature is discussed in Section 3. Our approach is based on hierarchically searching nodes in a Kd tree while handling the appropriate constraints induced by the metric and topology. We have proven the correctness of our approach, and we have demonstrated the efficiency of the algorithm experimentally. We first present experiments that demonstrate the performance improvement over using linear-time naive nearest neighbor computations. The improvement is a few orders of magnitude in some cases. We also present experiments that show substantial performance improvement in the PRM and RRT applied to difficult path planning examples.
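The RRT iteration just described is dominated by the nearest-neighbor query. As a rough illustration only (a sketch, not the authors' implementation), the following C++ fragment shows the growth loop with the naive linear-time nearest-neighbor computation that the Kd-tree method of this paper is meant to replace; all type and function names are illustrative assumptions, and collision checking is omitted.

// Minimal, self-contained sketch (not the authors' code) of the RRT growth loop,
// using a brute-force O(n) nearest-neighbor query.  This linear scan is the step
// that the Kd-tree algorithm described later accelerates.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

using Config = std::vector<double>;   // a point in an n-dimensional configuration space

static double sqDist(const Config& a, const Config& b) {   // Euclidean metric for simplicity
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return s;
}

// Brute-force nearest neighbor: one full pass over the vertices per query.
static std::size_t nearest(const std::vector<Config>& vertices, const Config& q) {
    std::size_t best = 0;
    double bestD = sqDist(vertices[0], q);
    for (std::size_t i = 1; i < vertices.size(); ++i) {
        double d = sqDist(vertices[i], q);
        if (d < bestD) { bestD = d; best = i; }
    }
    return best;
}

// Precondition: vertices already contains the root configuration.
void growRRT(std::vector<Config>& vertices, std::vector<std::size_t>& parent,
             int iterations, double stepSize, std::size_t dim) {
    std::mt19937 gen(0);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    for (int k = 0; k < iterations; ++k) {
        Config qRand(dim);
        for (auto& x : qRand) x = uni(gen);              // random configuration in the unit box
        std::size_t near = nearest(vertices, qRand);     // the step this paper accelerates
        Config qNew = vertices[near];                    // step from the nearest vertex toward qRand
        double d = std::sqrt(sqDist(vertices[near], qRand));
        for (std::size_t i = 0; i < dim; ++i)
            qNew[i] += stepSize * (qRand[i] - vertices[near][i]) / (d > 0 ? d : 1.0);
        vertices.push_back(qNew);                        // (collision checking omitted)
        parent.push_back(near);
    }
}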
2 Problem Formulation

The configuration space, C, which arises in motion planning problems, is usually a non-Euclidean manifold. In the case of a 2D rigid body in the plane, C = R^2 × S^1, in which S^1 represents a circle. In the case of a 3D rigid body in space, C = R^3 × P^3, in which P^3 denotes a three-dimensional projective space. In the case of multiple bodies, the resulting configuration space is obtained by taking Cartesian products of copies of R, S^1, and P^3. These desired n-dimensional configuration spaces, and many others, can be represented by defining a rectangular subset of R^n and identifying appropriate pairs of boundary points to obtain the desired topology. For example, several two-dimensional manifolds can be obtained by identifying points on the unit square in the plane, as shown in Figure 1. Arrows on a pair of opposite edges indicate identification of the opposite points on these edges. If arrows are drawn in opposite directions, then there is a twist in the identification.

Figure 1: Some 2D manifolds obtained by identifications of the rectangle boundary points (panels: cylinder, torus, projective plane).

The manifolds of interest in this paper are constructed as the Cartesian products of any of:

  • Euclidean one-space, represented by (0, 1) ⊂ R.
  • The circle, represented by [0, 1], in which 0 ~ 1 by identification.
  • P^3, represented by a three-dimensional unit cube with antipodal points identified.

To define a nearest neighbor problem, it will be essential to define a metric on these manifolds.

Definition 2.1  The distance between two points p, q ∈ R^1 is defined as follows:

    dist_R(q, p) = |q - p|.

Definition 2.2  The distance between two points p, q ∈ S^1 is defined as follows:

    dist_{S^1}(q, p) = min(|q - p|, 1 - |q - p|).

Definition 2.3  The distance between two points q, p ∈ P^3, in which q = (q_1, q_2, q_3) and p = (p_1, p_2, p_3), is defined as follows:

    dist_{P^3}(q, p) = min_{i=1..8} d_i,  in which
      d_1     = dist_{R^3}(q, p),
      d_2     = sqrt( sum_{j=1..3} (1 - |q_j - p_j|)^2 ),
      d_{3,4} = dist_{R^3}((-q_1, -q_2, q_3 ± 1), p),
      d_{5,6} = dist_{R^3}((-q_1, q_2 ± 1, -q_3), p),
      d_{7,8} = dist_{R^3}((q_1 ± 1, -q_2, -q_3), p).

Definition 2.4  For manifolds T_1 and T_2, if q, p ∈ T_1 × T_2, then the distance between these two points is defined as follows:

    dist_{T_1 × T_2}(q, p) = sqrt( dist_{T_1}^2(q, p) + dist_{T_2}^2(q, p) ).

Other metric definitions could be used in our approach; however, we preclude their consideration in this presentation.

The problem is: given a d-dimensional manifold, T, and a set of data points in T, preprocess these points so that, for any query point q ∈ T, the nearest data point to q can be found quickly.
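To make Definitions 2.1-2.4 concrete, the following C++ sketch computes these metrics under the unit-interval representation above. It is an illustration rather than the authors' code, and the function names are assumptions.

// Illustrative C++ sketch of the metrics in Definitions 2.1-2.4, assuming every
// coordinate is represented in [0, 1] with the identifications described above.
#include <algorithm>
#include <array>
#include <cmath>

// Definition 2.1: distance on R^1.
double distR(double q, double p) { return std::fabs(q - p); }

// Definition 2.2: distance on S^1, where 0 and 1 are identified.
double distS1(double q, double p) {
    double d = std::fabs(q - p);
    return std::min(d, 1.0 - d);
}

// Euclidean distance on R^3, used as a building block in Definition 2.3.
double distR3(const std::array<double, 3>& q, const std::array<double, 3>& p) {
    double s = 0.0;
    for (int j = 0; j < 3; ++j) s += (q[j] - p[j]) * (q[j] - p[j]);
    return std::sqrt(s);
}

// Definition 2.3: distance on P^3 (unit cube with antipodal points identified),
// the minimum over the eight candidate distances d_1..d_8.
double distP3(const std::array<double, 3>& q, const std::array<double, 3>& p) {
    double best = distR3(q, p);                                  // d_1
    double s = 0.0;                                              // d_2
    for (int j = 0; j < 3; ++j) {
        double t = 1.0 - std::fabs(q[j] - p[j]);
        s += t * t;
    }
    best = std::min(best, std::sqrt(s));
    for (double sign : {-1.0, +1.0}) {                           // d_3 .. d_8
        best = std::min(best, distR3({-q[0], -q[1], q[2] + sign}, p));
        best = std::min(best, distR3({-q[0], q[1] + sign, -q[2]}, p));
        best = std::min(best, distR3({q[0] + sign, -q[1], -q[2]}, p));
    }
    return best;
}

// Definition 2.4: metric on a Cartesian product T_1 x T_2, combining the
// distances measured separately on each factor.
double distProduct(double dT1, double dT2) {
    return std::sqrt(dT1 * dT1 + dT2 * dT2);
}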
3 Algorithm Presentation

To solve the problem we use existing techniques for searching for nearest neighbors in Euclidean spaces. We make modifications to these techniques to handle the topology of the space. First we precompute the data structure for storing the points, and then we search this data structure when a query is given.

Kd trees.  There has been significant interest in nearest neighbors and related problems over the last couple of decades. The Kd tree is a powerful data structure that is based on recursively subdividing a set of points with alternating axis-aligned hyperplanes. The classical Kd tree uses O(dn lg n) precomputation time and answers orthogonal range queries in O(n^(1 - 1/d)) time. One of the first appearances of the Kd tree is in [8]. A more recent introduction appears in [6]. Improvements to the data structure and its construction algorithm in the context of nearest neighbor searching are described in [18]. In [4] it is shown that using Kd trees for finding approximate nearest neighbors allows significant improvement in running time with a very small loss in performance for higher dimensions.

Suppose that the data points lie in a d-dimensional box, which has some points identified on its boundary. We build the data structure inside this box and define it recursively as follows. The set of data points is split into two parts by splitting the box containing them into two child boxes by a plane, according to some splitting rule specified by the algorithm; one subset contains the points in one child box, and the other subset contains the rest of the points. The information about the splitting plane and the boundary values of the initial box is stored in the root node, and the two subsets are stored recursively in the two subtrees. When the number of data points contained in some box falls below a given threshold, we call the node associated with this box a leaf node, and we store a list of the coordinates of these data points in this node.

We use the dividing rule suggested by [16], which divides the current cell through its midpoint, orthogonal to its longest side. If there are ties, it selects the dimension with the longest point spread. However, in the case in which the points all lie on one side of the splitting plane, the algorithm slides the plane toward the first encountered data point.

KDTree::SEARCH(q)
Output: the point p stored in the Kd tree that is closest to q
  1  Calculate the squared distance d from the box associated with the root node to q
  2  p = NULL
  3  root->SEARCH(d, ∞, p)
  4  return p

Node::SEARCH(d, &dbest, &p)
Input: the squared distance d from q to the box containing the current node, and dbest, the
squared distance from q to the closest point p seen so far; dbest and p are to be updated
  1  if d < dbest
  2    Split b_K (the projection of the current node's box onto the topological space T_K,
       stored in this node) into two subboxes, b_K1 and b_K2, by the splitting line l,
       corresponding to the children v1 and v2 respectively.
  3    d1 = dist^2_{T_K}(q, b_K1)
  4    d2 = dist^2_{T_K}(q, b_K2)
  5    if d1 < d2
  6      then v1->SEARCH(d, dbest, p)
  7           v2->SEARCH(d - d1 + d2, dbest, p)
  8      else v2->SEARCH(d, dbest, p)
  9           v1->SEARCH(d - d2 + d1, dbest, p)
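The following C++ fragment is a hedged sketch of the recursion above for a product of R and S^1 factors. It mirrors KDTree::SEARCH and Node::SEARCH, but the node layout, helper names, and the leaf-node update (which does not appear in the excerpt above) are assumptions rather than the authors' ANN-based implementation.

// Sketch of the topology-aware Kd-tree nearest-neighbor search.  Each coordinate
// lives in [0, 1] and is either a Euclidean factor R or a circular factor S^1
// (Definition 2.2).  The leaf-node scan is an assumed detail.
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

enum class Topology { R, S1 };                       // per-dimension factor of the product space
using Point = std::vector<double>;

// Squared distance from coordinate x to the interval [lo, hi] in one dimension,
// respecting that 0 ~ 1 on an S^1 factor.
static double sqDistToInterval(double x, double lo, double hi, Topology t) {
    auto gap = [&](double a, double b) {             // 1-D point-to-point distance
        double d = std::fabs(a - b);
        return (t == Topology::S1) ? std::min(d, 1.0 - d) : d;
    };
    if (x >= lo && x <= hi) return 0.0;
    double g = std::min(gap(x, lo), gap(x, hi));
    return g * g;
}

struct Node {
    bool leaf = false;
    std::vector<Point> points;                       // stored data points (leaf nodes only)
    int k = 0;                                       // splitting dimension K
    double lo = 0.0, hi = 1.0, mid = 0.5;            // box extent and split value in dimension K
    Node* v1 = nullptr;                              // child covering [lo, mid]
    Node* v2 = nullptr;                              // child covering [mid, hi]
};

// Mirrors Node::SEARCH(d, &dbest, &p): d is the squared distance from q to this
// node's box; dbest and best are updated in place.
static void search(const Node* n, const Point& q, const std::vector<Topology>& topo,
                   double d, double& dbest, const Point*& best) {
    if (d >= dbest) return;                          // line 1: prune this subtree
    if (n->leaf) {                                   // assumed leaf handling (not in the excerpt)
        for (const Point& p : n->points) {
            double s = 0.0;
            for (std::size_t j = 0; j < q.size(); ++j)
                s += sqDistToInterval(q[j], p[j], p[j], topo[j]);   // point-to-point distance
            if (s < dbest) { dbest = s; best = &p; }
        }
        return;
    }
    // lines 2-4: squared distances from q to the two subboxes in dimension K only
    double d1 = sqDistToInterval(q[n->k], n->lo, n->mid, topo[n->k]);
    double d2 = sqDistToInterval(q[n->k], n->mid, n->hi, topo[n->k]);
    if (d1 < d2) {                                   // lines 5-7: visit the nearer child first
        search(n->v1, q, topo, d, dbest, best);
        search(n->v2, q, topo, d - d1 + d2, dbest, best);
    } else {                                         // lines 8-9: symmetric case
        search(n->v2, q, topo, d, dbest, best);
        search(n->v1, q, topo, d - d2 + d1, dbest, best);
    }
}

// Mirrors KDTree::SEARCH(q): dbest starts at infinity; d starts at the squared
// distance from q to the root box (0 here, assuming q lies in the unit box).
const Point* nearestNeighbor(const Node* root, const Point& q,
                             const std::vector<Topology>& topo) {
    double dbest = std::numeric_limits<double>::infinity();
    const Point* best = nullptr;
    search(root, q, topo, 0.0, dbest, best);
    return best;
}

Note that the farther child is visited with the incrementally updated squared distance d - d1 + d2, exactly as in line 7 of the pseudocode; this cheap update, together with the test against dbest, is what allows entire subtrees to be pruned.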