Journal of Parallel and Distributed Computing 138 (2020) 211–221
Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/jpdc

Designing an efficient parallel spectral clustering algorithm on multi-core processors in Julia

Zenan Huo a, Gang Mei a,∗, Giampaolo Casolla b, Fabio Giampaolo c

a School of Engineering and Technology, China University of Geosciences (Beijing), 100083, Beijing, China
b Department of Mathematics and Applications "R. Caccioppoli", University of Naples FEDERICO II, Italy
c Consorzio Interuniversitario Nazionale per l'Informatica (CINI), Italy

Article history: Received 14 December 2019; Received in revised form 5 January 2020; Accepted 12 January 2020; Available online 20 January 2020.

Keywords: Clustering algorithm; Spectral clustering; Parallel algorithm; Multi-core processors; Julia language

Abstract

Spectral clustering is widely used in data mining, machine learning, and other fields. It can identify clusters of arbitrary shape in a sample space and converge to the global optimal solution. Compared with the traditional k-means algorithm, the spectral clustering algorithm has stronger adaptability to data and produces better clustering results. However, the computation of the algorithm is quite expensive. In this paper, an efficient parallel spectral clustering algorithm on multi-core processors in the Julia language is proposed; we refer to it as juPSC. The Julia language is a high-performance, open-source programming language. The juPSC is composed of three procedures: (1) calculating the affinity matrix, (2) calculating the eigenvectors, and (3) conducting k-means clustering. Procedures (1) and (3) are computed by the efficient parallel algorithm, and the COO format is used to compress the affinity matrix. Two groups of experiments are conducted to verify the accuracy and efficiency of the juPSC. Experimental results indicate that (1) the juPSC achieves speedups of approximately 14×–18× on a 24-core CPU and that (2) the serial version of the juPSC is faster than the Python version of scikit-learn. Moreover, the structure and functions of the juPSC are designed considering modularity, which is convenient for combination and further optimization with other parallel computing platforms.

© 2020 Elsevier Inc. All rights reserved.

Abbreviations: COO, Coordinate Format; CSC, Compressed Sparse Column Format; CPU, Central Processing Unit; FEM, Finite Element Method; GPU, Graphics Processing Unit; JIT, Just-in-time Compilation; MKL, Intel Math Kernel Library; MPM, Material Point Method.

∗ Corresponding author. E-mail address: [email protected] (G. Mei).
https://doi.org/10.1016/j.jpdc.2020.01.003

1. Introduction

In recent years, machine learning has made great progress and has become the preferred method for developing practical software in areas such as computer vision, speech recognition, and natural language processing [37,45,50,55]. Machine learning mainly includes supervised learning and unsupervised learning, and clustering is a central task of unsupervised learning. Among the many clustering algorithms, spectral clustering has become a popular one [32,53]. Spectral clustering is a technique originating from graph theory [17,29] that uses the edges connecting the nodes in a graph to identify clusters, and it also allows us to cluster non-graphic data.

Unsupervised clustering analysis can explore the internal group structure of data and has been widely used in various data analysis settings, including computer vision, statistical analysis, image processing, medical information processing, biological science, social science, and psychology [19,44,51]. The basic principle of clustering analysis is to divide the data into different clusters: members of the same cluster have similar characteristics, while members of different clusters have different characteristics. The main types of clustering algorithms include partitioning methods, hierarchical clustering, fuzzy clustering, density-based clustering, and model-based clustering [38]. The most widely used clustering algorithms are k-means [61], DBSCAN [39], Ward hierarchical clustering [47], spectral clustering [53], and the BIRCH algorithm [66].

It has been proven in references [46,56] that the spectral clustering algorithm is more effective than other traditional clustering algorithms. However, in the process of spectral clustering computation, the affinity matrix between nodes needs to be constructed, and storing the affinity matrix requires much memory. It also takes a long time to obtain the first k eigenvectors of the Laplacian matrix. Thus, the spectral clustering algorithm is difficult to apply to large-scale data processing.

For the large-scale spectral clustering problem, approximation techniques are usually adopted to avoid the dense matrix and its operations. For example, the Nyström expansion method [30] avoids directly calculating the overall affinity matrix while ensuring accuracy. Several methods are also available for sparsifying the matrix [43]. In recent research, Deng et al. [18] proposed a landmark-based spectral clustering algorithm, which scales linearly with the problem size.

In addition to improvements of the spectral clustering algorithm itself, many researchers have also focused on parallel algorithms. Gou et al. [33] constructed a sparse spectral clustering framework based on the parallel computation of MATLAB. Jin et al. [40] combined spectral clustering with MapReduce and, through the evaluation of sparse matrix eigenvalues and the computation of distributed clustering, improved the speed of the spectral clustering algorithm.

Most existing spectral clustering algorithms are implemented in static programming languages, such as C/C++ or Fortran. Although these provide a certain guarantee of execution efficiency, they require high programming skill, and code maintenance is difficult, which leads to more time spent on design and implementation. Dynamic languages, such as Python and MATLAB, offer good interactivity, and the code is easier to read; researchers can concentrate on algorithm design rather than program debugging, but at the cost of computational efficiency.

The Julia language is a new programming language that successfully combines the high performance of static programming languages with the agility of dynamic programming languages [16]. The Julia language enables programmers to implement algorithms naturally and intuitively through easy-to-understand syntax. Julia's type stability, achieved through specialization via multiple dispatch, makes it easy to compile programs into efficient code. Julia is widely used in machine learning, and there are many excellent packages of clustering algorithms on Julia Observer [1], such as Clustering.jl [2], ScikitLearn.jl [3], and QuickShiftClustering.jl [4]. The Clustering.jl package not only implements a variety of clustering algorithms but also provides many methods to evaluate clustering results or verify their correctness.

To combine the performance advantages of the Julia language with the characteristics of parallel algorithms, we have designed and implemented an efficient parallel spectral clustering algorithm on multi-core processors in the Julia language. We refer to it as juPSC. To the best of the authors' knowledge, the juPSC is the first high-performance

In the graph, nodes are set to have a certain similarity, which is expressed by the weight of the edge E between the two nodes; thus, an undirected weighted graph G = (V, E) is obtained. The optimal partition criterion based on graph theory is to maximize the similarity of the nodes within each part of the final partition and to minimize the similarity of the nodes belonging to different subgraphs.

In the spectral clustering algorithm, we construct an undirected graph based on the similarity between the data and construct the adjacency matrix according to the similarity between the nodes. We turn the problem into the optimal partitioning problem of graph G. The choice of partitioning criterion directly affects the final clustering result. The common partition rules in graph theory are Minimum cut [54], Normalized cut (N-cut) [27], Ratio cut [34], Average cut [64], and Min–max cut [56]. We construct a new eigenspace using the eigenvectors corresponding to the first k eigenvalues of the Laplacian matrix and apply a traditional clustering algorithm, such as k-means, in the new eigenspace. The details of the spectral clustering algorithm using N-cut are as follows.

Step 1. Defining graph notation

The given data correspond to the nodes of the graph, and the edges between nodes are weighted, so that the undirected weighted graph G is:

    G = (V, E),  E = {(i, j) : S_ij > 0} ⊆ V × V    (1)

where V = {1, ..., n} is the node set and E is the edge set. The clustering problem is then transformed into the optimal partition problem of graph G. Graph G can be divided into two disjoint sets A and B (i.e., A ∪ B = V and A ∩ B = ∅):

    cut(A, B) = Σ_{u∈A, v∈B} W_uv    (2)

Step 2. Calculating the affinity matrix

According to the similarity between nodes, the spectral clustering algorithm divides the categories. In the construction of the similarity graph, the accurate relationship between the local neighborhoods of nodes can reflect the real clustering structure.

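The three procedures named in the abstract (affinity matrix, eigenvectors, k-means) can be sketched serially and densely in plain Julia. This is a minimal illustration assuming a Gaussian affinity with bandwidth sigma; the toy data, the choice of sigma, and the sign-based split standing in for the k-means step are illustrative assumptions, not the parallel, COO-compressed implementation of juPSC:

```julia
using LinearAlgebra

# Toy data: two well-separated 1-D groups (illustrative only).
X = [0.0, 0.2, 0.4, 2.0, 2.2, 2.4]
n, sigma, k = length(X), 0.5, 2

# Procedure (1): Gaussian affinity S_ij = exp(-(x_i - x_j)^2 / (2 sigma^2)), zero diagonal.
S = [i == j ? 0.0 : exp(-(X[i] - X[j])^2 / (2 * sigma^2)) for i in 1:n, j in 1:n]

# Normalized Laplacian L = I - D^(-1/2) S D^(-1/2), with D the diagonal degree matrix.
d = vec(sum(S, dims=2))
Dh = Diagonal(1 ./ sqrt.(d))
L = I - Dh * S * Dh

# Procedure (2): the eigenvectors of the k smallest eigenvalues span the new eigenspace
# (eigen on a Symmetric matrix returns eigenvalues in ascending order).
U = eigen(Symmetric(L)).vectors[:, 1:k]

# Procedure (3): for this two-cluster toy case, a sign split on the second eigenvector
# stands in for the k-means step of the full algorithm.
labels = [sign(u) == sign(U[1, 2]) ? 1 : 2 for u in U[:, 2]]
println(labels)   # the first three points and the last three points get different labels
```

In the full algorithm, k-means in the k-dimensional eigenspace replaces the sign split, and, per the abstract, the affinity and clustering procedures are the ones computed in parallel.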