NG-DBSCAN: Scalable Density-Based Clustering for Arbitrary Data

Alessandro Lulli (1,2), Matteo Dell'Amico (3), Pietro Michiardi (4), Laura Ricci (1,2)
(1) University of Pisa, Italy; (2) ISTI, CNR, Pisa, Italy; (3) Symantec Research Labs, France; (4) EURECOM, Campus SophiaTech, France

ABSTRACT

We present NG-DBSCAN, an approximate density-based clustering algorithm that operates on arbitrary data and any symmetric distance measure. The distributed design of our algorithm makes it scalable to very large datasets; its approximate nature makes it fast, yet capable of producing high quality clustering results. We provide a detailed overview of the steps of NG-DBSCAN, together with their analysis. Our results, obtained through an extensive experimental campaign with real and synthetic data, substantiate our claims about NG-DBSCAN's performance and scalability.

1. INTRODUCTION

Clustering algorithms are fundamental in data analysis, providing an unsupervised way to aid understanding and interpreting data by grouping similar objects together. With DBSCAN, Ester et al. [9] introduced the idea of density-based clustering: grouping data packed in high-density regions of the feature space. DBSCAN is very well known and appreciated (it received the KDD test of time award in 2014) thanks to two very desirable features: first, it separates "core points" appearing in dense regions of the feature space from outliers ("noise points"), which are classified as not belonging to any cluster; second, it recognizes clusters on complex manifolds, having arbitrary shapes rather than being limited to "ball-shaped" ones, which are all similar to a given centroid.

Unfortunately, two limitations restrict DBSCAN's applicability to increasingly common cases: first, it is difficult to run it on very large databases as its scalability is limited; second, existing implementations do not lend themselves well to heterogeneous data sets where item similarity is best represented via arbitrarily complex functions. We target both problems, proposing an approximated, scalable, distributed DBSCAN implementation which is able to handle any symmetric distance function and arbitrary data items, rather than being limited to points in Euclidean space.

Ester et al. claimed O(n log n) running time for data in d-dimensional Euclidean spaces, but Gan and Tao [12] recently proved this claim wrong: for d > 3, DBSCAN requires at least Ω(n^{4/3}) time unless very significant breakthroughs can be made in theoretical computer science. As data size grows, this complexity becomes difficult to handle; for this very reason, Gan and Tao proposed an approximated, fast single-machine DBSCAN implementation for points in the Euclidean space.

Several distributed DBSCAN implementations exist [6, 14, 16, 25]: they partition the feature space, running a single-machine DBSCAN implementation on each partition, and then "stitch" the work done on the border of each partition. As our results show, this approach is effective only when dimensionality is low: with large dimensionalities, the amount of work to connect each partition's results becomes unbearably large.

As we discuss in Section 2, the definition of DBSCAN itself simply requires a distance measure between items; a large majority of existing implementations, though, only consider data as points in a d-dimensional space, and only support Euclidean distance between them. This is the case for the distributed implementations referred to above, which base the partitioning on the requirement that pieces of data are points in a Euclidean space. This is inconvenient in the ever more common case of heterogeneous data sets, where data items fed to the clustering algorithm are made of one or more fields with arbitrary type: consider, for example, the case of textual data where edit distance is a desirable measure. Furthermore, the computational cost of the last "stitching" step grows quickly as the number d of dimensions increases, even if the intrinsic data dimensionality remains low.
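To make these definitions concrete, here is a minimal, naive Python sketch of the textbook DBSCAN procedure parameterized by an arbitrary symmetric distance function, applied to short strings under edit distance. It illustrates the original definition only, not NG-DBSCAN; the names and the quadratic neighborhood computation are ours, purely for exposition.

```python
# A minimal, naive sketch of the textbook DBSCAN definition (Ester et al. [9]),
# parameterized by any symmetric distance function; it is NOT the NG-DBSCAN
# algorithm. The O(n^2) neighborhood computation is for exposition only.

def edit_distance(a, b):
    """Levenshtein distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def dbscan(items, dist, eps, min_pts):
    """Return one cluster id per item, or -1 for noise points."""
    n = len(items)
    # eps-neighborhood of each item: everything within distance eps (itself included).
    neigh = [[j for j in range(n) if dist(items[i], items[j]) <= eps] for i in range(n)]
    core = [len(neigh[i]) >= min_pts for i in range(n)]   # "core points"
    labels = [-1] * n                                     # -1 = noise / unvisited
    cluster = 0
    for i in range(n):
        if not core[i] or labels[i] != -1:
            continue
        labels[i] = cluster                               # start a new cluster
        frontier = [i]
        while frontier:                                   # expand through core points only
            p = frontier.pop()
            for q in neigh[p]:
                if labels[q] == -1:
                    labels[q] = cluster
                    if core[q]:
                        frontier.append(q)
        cluster += 1
    return labels

words = ["cat", "cart", "card", "care", "python", "pythons", "xyzzy"]
print(dbscan(words, edit_distance, eps=1, min_pts=2))    # e.g. [0, 0, 0, 0, 1, 1, -1]
```

Nothing in this definition depends on the items being numeric vectors: only the distance function does, which is exactly the flexibility NG-DBSCAN aims to preserve at scale.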
The typical way of coping with such limitations is extracting an array of numeric features from the original data: for example, textual data is converted to a vector via the word2vec [5] algorithm. Then, distance between these vectors is used as a surrogate for the desired distance function between data. Our proposal, NG-DBSCAN (described in Section 3), gives instead the flexibility of specifying any symmetric distance function on the original data. Recognizing from the contribution of Gan and Tao that computing DBSCAN exactly imposes limits to scalability, our approach computes instead an approximation to the exact DBSCAN clustering.

Rather than partitioning a Euclidean space – which is impossible with arbitrary data and has problems with high dimensionality, as discussed before – our algorithm is based on a vertex-centric design, whereby we compute a neighbor graph, a distributed data structure describing the "neighborhood" of each piece of data (i.e., a set containing its most similar items). We compute the clusters based on the content of the neighbor graph, whose acronym gives the name to NG-DBSCAN.
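Conceptually, a neighbor graph maps every item to a small set of its most similar items under the chosen distance. The brute-force sketch below, with names of our own choosing, only illustrates that structure on a toy input; NG-DBSCAN builds an approximation of it in a distributed, vertex-centric way (Section 3) rather than by comparing all pairs.

```python
# Illustrative sketch of a neighbor graph: a map from every item to (at most) k of
# its most similar items under an arbitrary symmetric distance. Brute-force O(n^2)
# construction, for exposition only.

def neighbor_graph(items, dist, k):
    graph = {}
    for i, x in enumerate(items):
        others = [y for j, y in enumerate(items) if j != i]
        graph[x] = sorted(others, key=lambda y: dist(x, y))[:k]
    return graph

# Toy example: scalars with absolute difference as the distance.
points = [1.0, 1.1, 1.2, 5.0, 5.1, 9.7]
for x, nbrs in neighbor_graph(points, lambda a, b: abs(a - b), k=2).items():
    print(x, "->", nbrs)
```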
NG-DBSCAN is implemented in Spark, and it is suitable to be ported to frameworks that enable distributed vertex-centric computation; in our experimental evaluation we assess both the scalability of the algorithm and the quality of the results, i.e., how close these results are to those of an exact computation of DBSCAN. We compare NG-DBSCAN with competing DBSCAN implementations, on real and synthetic datasets. All details on the experimental setup are discussed in Section 4.

Our results, reported in Section 5, show that NG-DBSCAN often outperforms competing DBSCAN implementations, while the approximation imposes small or negligible impact on the results. Furthermore, we investigate the case of clustering text based on a word2vec embedding: we show that – if one is indeed interested in clustering text based on edit-distance similarity – in the existing approaches the penalty in terms of clustering quality is substantial, unlike what happens with the approach enabled by NG-DBSCAN.

We consider that this line of research opens the door to several interesting and important contributions. With the concluding remarks of Section 6, we outline our future research lines, including an adaptation of this approach to streaming data, and supporting regression and classification using similar approaches.

Summary of Contributions. NG-DBSCAN is an approximated and distributed implementation of DBSCAN. Its main merits are:

  • Efficiency. It often outperforms other distributed DBSCAN implementations, while the approximation has a small to negligible impact on results.

  • Versatility. The vertex-centric approach enables distribution without needing Euclidean spaces to partition. NG-DBSCAN allows experts to represent item dissimilarity through any symmetric distance function, allowing them to tailor their definition to domain-specific knowledge.

Our experimental evaluation supports these claims through an extensive comparison between NG-DBSCAN and alternative implementations, on a variety of real and synthetic datasets.

… implementations only evaluated DBSCAN for rather limited numbers of points – and proposed an approximate algorithm, ρ-DBSCAN, running in O(n) time. Unfortunately, the data structure at the core of ρ-DBSCAN does not allow handling arbitrary data or similarity measures, and only Euclidean distance is used in both the description and experimental evaluation.

We remark that the definition of DBSCAN revolves around the ability to find the ε-neighborhood of each data point: as long as a distance measure is given, the ε-neighborhood of a point p is well-defined no matter what the type of p is. NG-DBSCAN does not impose any limitation on the type of data points nor on the properties of the distance function, except symmetry.

2.2 Distributed Density-Based Clustering

MR-DBSCAN [14] is the first proposal of a distributed DBSCAN implementation realized as a 4-stage MapReduce algorithm: partitioning, clustering, and two stages devoted to merging. This approach concentrates on defining a clever partitioning of data in a d-dimensional Euclidean space, where each partition is assigned to a worker node. A modified version of PDBSCAN [31], a popular DBSCAN implementation, is executed on the sub-space of each partition. Nodes within distance ε from a partition's border are replicated, and two stages are in charge of merging clusters between different partitions. Unfortunately, MR-DBSCAN's evaluation does not compare it to other DBSCAN implementations, and only considers points in a 2D space. In the Evaluation section, we compare our results to SPARK-DBSCAN and IRVINGC-DBSCAN, two implementations inspired by MR-DBSCAN and implemented in Apache Spark.

DBSCAN-MR [6] is a similar approach which again implements DBSCAN as a 4-stage MapReduce algorithm, but uses a k-d tree for the single-machine implementation, and a partitioning algorithm that recursively divides data in slices to minimize the …
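To make the partition-and-merge strategy shared by these systems concrete, the following simplified sketch (our own illustration, not MR-DBSCAN or DBSCAN-MR code) shows a 2D grid partitioning in which points within ε of a cell border are replicated into the adjacent cells, so that each cell can be clustered independently before clusters are stitched across borders.

```python
# Simplified sketch of the partitioning step used by MR-DBSCAN-style systems:
# a 2D grid partitioning in which every point lying within eps of a cell border
# is replicated ("halo" points) into the adjacent cells, so each cell can be
# clustered independently before clusters are stitched across borders.

def partition_with_halo(points, cell_size, eps):
    cells = {}
    for (x, y) in points:
        cx, cy = int(x // cell_size), int(y // cell_size)
        targets = {(cx, cy)}                      # the point's own cell
        for dx in (-1, 0, 1):                     # inspect the 8 surrounding cells
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nx, ny = cx + dx, cy + dy
                # Distance from (x, y) to the neighboring cell's closest edge.
                ex = max(nx * cell_size - x, x - (nx + 1) * cell_size, 0)
                ey = max(ny * cell_size - y, y - (ny + 1) * cell_size, 0)
                if (ex * ex + ey * ey) ** 0.5 <= eps:
                    targets.add((nx, ny))         # replicate near the border
        for c in targets:
            cells.setdefault(c, []).append((x, y))
    return cells

pts = [(0.2, 0.3), (0.95, 0.5), (1.05, 0.5), (3.0, 3.0)]
for cell, members in sorted(partition_with_halo(pts, cell_size=1.0, eps=0.1).items()):
    print(cell, members)
```

The replicated border points are what the later merging stages must reconcile; their number grows with ε and with dimensionality, which is the source of the scalability issues discussed in the introduction, and the step that a vertex-centric design avoids entirely.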
