Quantitative Analysis of Nearest-Neighbors Search in High-Dimensional Sampling-Based Motion Planning

International Workshop on the Algorithmic Foundations of Robotics (WAFR), New York, NY, 2006

Erion Plaku and Lydia E. Kavraki
Department of Computer Science, Rice University
{plakue, kavraki}@cs.rice.edu

Abstract: We quantitatively analyze the performance of exact and approximate nearest-neighbors algorithms on increasingly high-dimensional problems in the context of sampling-based motion planning. We study the impact of the dimension, number of samples, distance metrics, and sampling schemes on the efficiency and accuracy of nearest-neighbors algorithms. Efficiency measures computation time and accuracy indicates similarity between exact and approximate nearest neighbors. Our analysis indicates that after a critical dimension, which varies between 15 and 30, exact nearest-neighbors algorithms examine almost all the samples. As a result, exact nearest-neighbors algorithms become impractical for sampling-based motion planners when a considerably large number of samples needs to be generated. The impracticality of exact nearest-neighbors algorithms motivates the use of approximate algorithms, which trade off accuracy for efficiency. We propose a simple algorithm, termed Distance-based Projection onto Euclidean Space (DPES), which computes approximate nearest neighbors by using a distance-based projection of high-dimensional metric spaces onto low-dimensional Euclidean spaces. Our results indicate DPES achieves high efficiency and only a negligible loss in accuracy.

1 Introduction

Research in motion planning has in recent years focused on sampling-based algorithms [1,5,9,13,14,17,20,22] for solving problems involving multiple and highly complex robots. Such algorithms rely on an efficient sampling of the configuration space and compute nearest neighbors for the sampled points. In general, the k nearest neighbors of a point in a data set are defined as the k closest points in the data set according to a distance metric. As research in motion planning progressively addresses problems of unprecedented complexity, nearest-neighbors computations based on arbitrary distance metrics and large high-dimensional data sets become increasingly challenging.

Researchers have developed many nearest-neighbors algorithms, such as the kd-tree, R-tree, X-tree, M-tree, VP-tree, Gnat, and iDistance, surveyed in [7,10,11], and others [3]. Analysis has shown that for certain distance metrics and data distributions, the computational efficiency of such algorithms decreases as the dimension increases [4,6,12,18,23]. As summarized in [15], although the nearest neighbor of a point according to L2 from an n-point d-dimensional data set can be computed in O(d^{O(1)} log n) time, the associated n^{O(d)} space requirement is impractical. Reducing the space requirement to what is practical, i.e., O(dn), increases the query time to min(2^{O(d)}, dn). In fact, after a critical dimension, the brute-force linear method, which examines the entire data set, is computationally faster than other exact nearest-neighbors algorithms. The work in [23] shows 610 as the theoretical bound on the critical dimension for L2 and uniformly distributed points in [0,1]^d. The experiments in [23], however, indicate 10 as the critical dimension, a much lower estimate than the theoretical bound. In [4,12], the critical dimension is estimated between 10 and 15 for Lp metrics and several synthetic and image data sets.
Another viable approach for nearest-neighbors computations is to use approximate algorithms which trade off accuracy for efficiency [2,10,19,21], where accuracy indicates similarity between exact and approximate nearest neighbors. As summarized in [15], in the case of L2, approximate nearest neighbors can be computed probabilistically in dn^{1/(1+ε)} time and O(dn) space, or deterministically in (d log n / ε)^{O(1)} time and n^{1/ε^{O(1)}} space. Such algorithms gain efficiency by projecting the data set onto low-dimensional spaces and achieve high accuracy when the projection results in low distortion of distances. The computational advantages of approximate nearest-neighbors algorithms are more evident when the dimension d of the data set is high. The problem, however, remains challenging for general metrics. As summarized in [16], any n-point metric space can be projected onto R^{O(log^2 n)} with only O(log n) distortion. However, solving high-dimensional motion planning problems requires generating millions of samples, which makes O(log^2 n) projection dimensions impractical. By increasing the distortion to O(n^{2/d} log^{3/2} n), and thus reducing the accuracy, any n-point metric space can be projected onto R^d. Therefore, the efficiency or accuracy of approximate nearest-neighbors algorithms is typically reduced when general metrics are used instead of L2.

The analysis of nearest-neighbors algorithms generally assumes a uniform distribution of points and the use of L2. In motion planning, the distribution is impacted by the sampling scheme. Samples satisfy certain criteria, such as representing collision-free configurations, and, consequently, the distribution is usually non-uniform. Furthermore, distances between configurations are not necessarily defined by L2, but instead attempt to capture the success of the local planner. Motion planners therefore exhibit a degree of flexibility which can be exploited to compute approximate instead of exact nearest neighbors. Research in [22] shows that in certain high-dimensional problems using random neighbors actually improves the performance of motion planners. Therefore, an understanding of the impact of these factors on the efficiency and accuracy of nearest-neighbors algorithms employed by motion planners could provide valuable insight in addressing high-dimensional motion planning problems.

In this work, we quantitatively analyze exact and approximate nearest-neighbors algorithms in the context of high-dimensional sampling-based motion planning. We focus on roadmap-based algorithms, such as the Probabilistic RoadMap (PRM) method with uniform [17], bridge [13], Gaussian [5], and obstacle [1] sampling, and tree-based algorithms, such as the Rapidly-exploring Random Tree (RRT) [20] and the Expansive-Spaces Tree (EST) [14]. We address the following questions: (i) under what conditions, if any, should motion planners use the brute-force linear method instead of other exact nearest-neighbors algorithms? (ii) do approximate nearest-neighbors algorithms compute, more efficiently, neighbors that are similar to exact nearest neighbors on high-dimensional motion planning problems? We study the impact of the dimension, number of samples, distance metrics, and sampling schemes on the efficiency and accuracy of nearest-neighbors algorithms.
Our analysis indicates that after a critical dimension the brute-force linear method is computationally more efficient than other exact nearest-neighbors algorithms. The critical dimension, however, depends on the number of samples, distance metric, and sampling scheme. We present results that quantify these dependencies.

Motivated by the impracticality of exact nearest-neighbors algorithms, we propose the use of approximate algorithms for the computation of neighbors in high-dimensional motion planning problems. In this work, we develop a simple algorithm, termed Distance-based Projection onto Euclidean Space (DPES), which computes approximate nearest neighbors by projecting high-dimensional metric spaces onto low-dimensional Euclidean spaces. The projection is based on distances between a set of selected points and points in the data set. Our experiments indicate DPES achieves high computational efficiency and only a negligible loss in accuracy.

The rest of the paper is organized as follows. In Sect. 2 we describe the methodology we use in our analysis and the DPES algorithm. In Sect. 3 we describe the experimental setup. In Sect. 4 we present the results of our analysis of nearest-neighbors algorithms. We conclude in Sect. 5 with a discussion.

2 Methods

In this section, we describe the nearest-neighbors algorithms we use in this paper, including DPES. We also outline the motion planners and distance metrics we use in this study. We denote the data set, number of nearest neighbors, and distance metric by S, k, and ρ : S × S → R^{≥0}, respectively.

2.1 Exact k Nearest-Neighbors Algorithms

We define the k nearest neighbors (kNN) of a point s_i ∈ S, denoted by NN_S(s_i, k), as the k closest points to s_i from S − {s_i} according to ρ.

Linear: This is a brute-force approach which resolves NN_S(s_i, k) by computing the distance from s_i to each point in S. The Linear method provides the basis for the comparison with other, more sophisticated kNN algorithms.

Gnat: Gnat [7] constructs a tree recursively by partitioning S into smaller sets and associating each set with a branch in the tree. Gnat then uses the triangle inequality to prune certain branches of the tree in order to compute kNN more efficiently. We choose Gnat in our analysis, since the results in [7] and our experiments in the context of motion planning with several kNN algorithms, such as the kd-tree, M-tree, and VP-tree [3], indicate Gnat to be more efficient, especially on large data sets and metric spaces.

2.2 Approximate k Nearest-Neighbors Algorithms

We define the approximate k nearest neighbors (kANN) of a point s_i ∈ S, denoted by ANN_S(s_i, k), as a subset of S − {s_i} of cardinality k that according to certain measures is similar to NN_S(s_i, k).

Random: This method selects S' ⊂ S uniformly at random, |S'| ≥ k, and computes ANN_S(s_i, k) as NN_{S'}(s_i, k). Random provides a basis for evaluating the quality of other kANN algorithms.

Distance-based Projection onto Euclidean Space (DPES): Our kANN algorithm is based on projecting each point s_i ∈ S to a point v(s_i) ∈ R^m, for some fixed m > 0. We then use L2 to define distances between projected points and compute ANN_S(s_i, k) as

    ANN_S(s_i, k) = {s' ∈ S : v(s') ∈ NN_{V(S)}(v(s_i), k)},  where  V(S) = {v(s_i) : s_i ∈ S}.
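To make the projection concrete, the following is a minimal sketch of the DPES idea in Python. It assumes the m pivot points are drawn uniformly at random from S; this excerpt does not fix a pivot-selection strategy, so the random choice, the function names, and the NumPy representation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dpes_project(S, rho, m, seed=0):
    """Map each s in S to v(s) = (rho(s, p_1), ..., rho(s, p_m)) in R^m.

    S    : list of points in an arbitrary metric space
    rho  : distance metric, rho(a, b) -> float
    m    : projection dimension (number of pivot points)
    Pivots are sampled uniformly at random here; this is an assumption,
    not necessarily the selection scheme used by DPES.
    """
    rng = np.random.default_rng(seed)
    pivots = [S[j] for j in rng.choice(len(S), size=m, replace=False)]
    # Row i of the result is v(s_i): the distances from s_i to each pivot.
    return np.array([[rho(s, p) for p in pivots] for s in S])

def dpes_ann(V, i, k):
    """Approximate kNN of point i: exact kNN under L2 in the projected space."""
    d2 = np.sum((V - V[i]) ** 2, axis=1)  # squared L2 distances to all points
    d2[i] = np.inf                        # exclude the query point itself
    return np.argsort(d2)[:k]             # indices of the k closest projections
```

Note that once the points live in R^m, the projected-space kNN query can also be answered by any Euclidean data structure (e.g., a kd-tree) rather than the brute-force scan used above for brevity.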

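Continuing the sketch above, one can compare DPES against the brute-force Linear baseline on synthetic data. The recall measure used below (fraction of exact neighbors recovered) is one plausible similarity measure and is our assumption for this illustration, not necessarily the accuracy measure used in the paper's experiments.

```python
def linear_knn(S, rho, i, k):
    """Exact kNN of S[i] by brute force, as in the Linear baseline."""
    d = [rho(S[i], s) if j != i else float("inf") for j, s in enumerate(S)]
    return sorted(range(len(S)), key=d.__getitem__)[:k]

# Toy data: 1000 points in a 30-dimensional space under the L1 metric.
rng = np.random.default_rng(1)
S = list(rng.random((1000, 30)))
rho = lambda a, b: float(np.sum(np.abs(a - b)))  # L1 distance

V = dpes_project(S, rho, m=10)
exact = set(linear_knn(S, rho, i=0, k=15))
approx = set(int(j) for j in dpes_ann(V, i=0, k=15))
print("recall:", len(exact & approx) / 15)  # fraction of exact neighbors found
```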