
Scalable Spectral Clustering Using Random Binning Features

Lingfei Wu (IBM Research AI), Pin-Yu Chen (IBM Research AI), Ian En-Hsu Yen (Carnegie Mellon University), Fangli Xu (College of William and Mary), Yinglong Xia (Huawei Research), Charu Aggarwal (IBM Research AI)

ABSTRACT
Spectral clustering is one of the most effective clustering approaches for capturing hidden cluster structures in data. However, it does not scale well to large-scale problems due to its quadratic complexity in constructing the similarity graph and computing the subsequent eigendecomposition. Although a number of methods have been proposed to accelerate spectral clustering, most of them sacrifice considerable information in the original data in order to reduce these computational bottlenecks. In this paper, we present a novel scalable spectral clustering method using Random Binning features (RB) to simultaneously accelerate both similarity graph construction and the eigendecomposition. Specifically, we implicitly approximate the graph similarity (kernel) matrix by the inner product of a large sparse feature matrix generated by RB. We then introduce a state-of-the-art SVD solver to efficiently compute the eigenvectors of this large matrix for spectral clustering. Using these two building blocks, we reduce the computational cost from quadratic to linear in the number of data points while achieving similar accuracy. Our theoretical analysis shows that spectral clustering via RB converges faster to exact spectral clustering than the standard Random Feature approximation. Extensive experiments on 8 benchmarks show that the proposed method either outperforms or matches the state-of-the-art methods in both accuracy and runtime. Moreover, our method exhibits linear scalability in both the number of data samples and the number of RB features.

CCS CONCEPTS
• Computing methodologies → Artificial intelligence; Machine learning; Unsupervised learning; Cluster analysis;

KEYWORDS
Spectral clustering; Graph Construction; Random Binning Features; Large-Scale Graph; PRIMME

ACM Reference Format:
Lingfei Wu, Pin-Yu Chen, Ian En-Hsu Yen, Fangli Xu, Yinglong Xia, and Charu Aggarwal. 2018. Scalable Spectral Clustering Using Random Binning Features. In KDD '18: The 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, August 19–23, 2018, London, United Kingdom. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3219819.3220090

1 INTRODUCTION
Clustering is one of the most fundamental problems in machine learning and data mining. In the past two decades, spectral clustering (SC) [6, 21, 24, 28] has shown great promise for learning hidden cluster structures from data. The superior performance of SC is rooted in exploiting non-linear pairwise similarities between data instances, whereas traditional methods like K-means rely heavily on Euclidean geometry and thus place limitations on the shapes of the clusters [5, 13, 35]. However, SC methods are typically not the first choice for large-scale clustering problems, since modern datasets pose great challenges in both computation and memory consumption when the pairwise similarity matrix $W \in \mathbb{R}^{N \times N}$ must be computed, where $N$ denotes the number of data points. In particular, given a data matrix $X \in \mathbb{R}^{N \times d}$ whose underlying data distribution can be represented as $K$ weakly inter-connected clusters, it requires $O(N^2)$ space to store $W$ and $O(N^2 d)$ time to compute it, and at least $O(KN^2)$ or $O(N^3)$ time to compute the $K$ eigenvectors of the corresponding graph Laplacian matrix $L$, depending on whether an iterative or a direct eigensolver is used.
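To make the quadratic bottleneck concrete, here is a back-of-the-envelope calculation (ours, not from the paper) of the storage cost of a dense double-precision $W$:

```python
# Storage for a dense float64 similarity matrix W grows as O(N^2):
# each of the N x N entries occupies 8 bytes.
for N in [10_000, 100_000, 1_000_000]:
    gigabytes = N * N * 8 / 1e9
    print(f"N = {N:>9,}: dense W needs {gigabytes:,.1f} GB")
# N =    10,000: dense W needs 0.8 GB
# N =   100,000: dense W needs 80.0 GB
# N = 1,000,000: dense W needs 8,000.0 GB
```

At one million points, the similarity matrix alone exceeds the memory of any single machine, which is why the approaches surveyed next target this construction step as well as the eigendecomposition.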
To accelerate SC, many efforts have been devoted to addressing the following computational bottlenecks: 1) construction of the pairwise similarity graph $W$ from the raw data $X$, and 2) eigendecomposition of the graph Laplacian matrix $L$.

A number of methods have been proposed to accelerate the eigendecomposition, e.g., randomized sketching and the power method [14, 17], sequential reduction algorithms with an early-stop strategy [3, 19], and graph filtering of random signals [27]. However, these approaches only partially reduce the computational cost, since constructing the similarity matrix $W$ still requires quadratic complexity in both computation and memory consumption.

Another family of research uses landmarks or representatives to jointly improve the computational efficiency of the similarity matrix $W$ and the eigendecomposition of $L$. One strategy is to perform random sampling or K-means on the dataset to select a small number of representative data points and then employ SC on the reduced dataset [25, 35]. Another strategy [9, 16, 18, 23] is to approximate the similarity matrix $W$ by a low-rank affinity matrix $Z \in \mathbb{R}^{N \times R}$, computed via either random projection or a bipartite graph between all data points and a set of selected anchor points [20]. Furthermore, a heuristic that retains only a few nearest anchor points has been applied to build a KNN-based sparse similarity graph. Despite promising results in accelerating SC, these approaches discard considerable information in the raw data and may degrade clustering performance. More importantly, there is no guarantee that these heuristic methods can approach the results of standard SC.
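As an illustration of the landmark strategy just described, below is a minimal sketch (ours; the cited methods differ in how anchors are chosen and how $Z$ is weighted or sparsified, and the function name and parameters are illustrative):

```python
import numpy as np
from scipy.spatial.distance import cdist

def landmark_affinity(X, R=500, sigma=1.0, seed=0):
    """Sample R anchor points and build a low-rank affinity matrix
    Z (N x R) from Gaussian similarities to the anchors, so that
    Z @ Z.T acts as a low-rank surrogate for the full N x N matrix W."""
    rng = np.random.default_rng(seed)
    anchors = X[rng.choice(len(X), size=R, replace=False)]
    # Distances from all N points to the R anchors: O(NRd) time and
    # O(NR) memory, versus O(N^2 d) and O(N^2) for the exact pairwise W.
    d2 = cdist(X, anchors, metric="sqeuclidean")
    Z = np.exp(-d2 / (2 * sigma**2))
    # Row-normalize so each point's affinities to the anchors sum to one.
    Z /= Z.sum(axis=1, keepdims=True)
    return Z
```

Because only similarities to the $R$ anchors are retained, any structure not captured by the anchors is lost, which is exactly the information loss discussed above.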
Another line of research [10, 11, 13, 33, 34] leverages kernel approximation techniques such as Nyström [29], Random Fourier features [4, 22, 32], and random sampling to accelerate the similarity matrix construction and the eigendecomposition at the same time. The pairwise similarity (kernel) matrix $W$ (a weighted fully-connected graph) is then directly approximated by an inner product of a feature matrix $Z$ computed from the raw data. Although a KNN-based graph construction allows an efficient sparse representation, the pairwise approach takes full advantage of the complete information in the data and accounts for long-range connections [8, 13]. The drawback of pairwise methods is the high computational cost of requiring the similarity between every pair of data points. Fortunately, we present an approximation technique applicable to SC that significantly alleviates this computational bottleneck. As our work focuses on enabling the scalability of SC using RB, robust spectral clustering on noisy data, such as [1], could be applied on top of our method but is beyond the scope of this paper.

In this paper, inspired by recent advances in the fields of kernel approximation and numerical linear algebra [7, 22, 30–32], we present a scalable spectral clustering method and a theoretical analysis that circumvent the two computational bottlenecks of SC on large-scale datasets. We highlight the following main contributions:

(1) We present for the first time the use of Random Binning features (RB) [22, 32] for scaling up the construction of the similarity graph in SC, which is implicitly approximated by the inner product of the RB sparse feature matrix $Z \in \mathbb{R}^{N \times D}$ derived from the raw dataset, where each row has $\mathrm{nnz}(Z(i,:)) = R$.

… in both the number of data samples $N$ and the number of RB features $R$. In both cases, our method exhibits linear scalability with respect to $N$ or $R$.

2 SPECTRAL CLUSTERING AND RANDOM BINNING
We briefly introduce the SC algorithm and then illustrate RB, an important building block of our proposed method. Here are some notations we will use throughout the paper.

2.1 Spectral Clustering
Given a set of $N$ data points $[x_1, \ldots, x_N] = X \in \mathbb{R}^{N \times d}$, with $x_i \in \mathbb{R}^d$, the SC algorithm constructs a similarity matrix $W \in \mathbb{R}^{N \times N}$ representing an affinity graph $G = (V, E)$, where the node $v_i \in V$ represents the data point $x_i$ and the edge $E_{ij}$ represents the similarity between $x_i$ and $x_j$.

The goal of SC is to use $W$ to partition $x_1, \ldots, x_N$ into $K$ clusters. There are several variants of SC. Without loss of generality, we consider the widely used normalized spectral clustering [21]. To fully utilize the complete similarity information, we consider a fully connected graph instead of a KNN-based graph for SC. An example similarity (kernel) function is the Gaussian kernel:
$$k(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right) \qquad (1)$$
where $\sigma$ is the bandwidth parameter. The normalized graph Laplacian matrix $L$ is defined as:
$$L = D^{-1/2}(D - W)D^{-1/2} = I - D^{-1/2} W D^{-1/2} \qquad (2)$$
where $D \in \mathbb{R}^{N \times N}$ is the degree matrix with diagonal elements $D_{ii} = \sum_{j=1}^{N} W_{ij}$. The objective function of normalized SC can be defined as [24]:
$$\min_{U \in \mathbb{R}^{N \times K},\; U^T U = I} \; \mathrm{trace}(U^T L U) \qquad (3)$$
where $\mathrm{trace}(\cdot)$ denotes the matrix trace, $I$ denotes the identity matrix, and the constraint $U^T U = I$ enforces orthonormality of the columns of $U$. We further denote $U^* \in \mathbb{R}^{N \times K}$ as the optimizer of (3).
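To ground Eqs. (1)–(3), the following is a minimal reference implementation of standard normalized SC (our own sketch, not the paper's RB-accelerated method; it deliberately keeps the $O(N^2)$ graph construction and dense eigensolve that the paper removes):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

def normalized_spectral_clustering(X, K, sigma=1.0):
    # Eq. (1): dense Gaussian similarity matrix W -- O(N^2 d) time, O(N^2) memory.
    W = np.exp(-squareform(pdist(X, "sqeuclidean")) / (2 * sigma**2))
    # Eq. (2): normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Eq. (3): the minimizer U* is spanned by the K eigenvectors of L
    # associated with its K smallest eigenvalues.
    _, U = eigh(L, subset_by_index=[0, K - 1])
    # Row-normalize the spectral embedding and cluster it with K-means,
    # the usual final step of normalized SC.
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=K, n_init=10).fit_predict(U)
```

The paper's method replaces the dense $W$ with the implicit factorization $ZZ^T$ given by the RB feature matrix and obtains $U$ from a sparse SVD of $Z$, which is what eliminates both quadratic costs.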