Efficient Large Scale Clustering Based on Data Partitioning

Malika Bendechache (Insight Centre for Data Analytics, School of Computer Science, University College Dublin, Dublin, Ireland; Email: [email protected])
Nhien-An Le-Khac (School of Computer Science, University College Dublin, Dublin, Ireland; Email: [email protected])
M-Tahar Kechadi (Insight Centre for Data Analytics, School of Computer Science, University College Dublin, Dublin, Ireland; Email: [email protected])

arXiv:1704.03421v2 [cs.DB] 26 Feb 2018

Abstract—Clustering techniques are very attractive for extracting and identifying patterns in datasets. However, their application to very large spatial datasets presents numerous challenges, such as high-dimensional data, heterogeneity, and the high complexity of some algorithms. For instance, some algorithms may have linear complexity, but they require domain knowledge in order to determine their input parameters. Distributed clustering techniques constitute a very good alternative for the big data challenges (e.g., Volume, Variety, Veracity, and Velocity). These techniques usually consist of two phases: the first generates local models or patterns, and the second aggregates the local results to obtain global models. While the first phase can be executed in parallel on each site and is therefore efficient, the aggregation phase is complex, time consuming, and may produce incorrect and ambiguous global clusters, and therefore incorrect models. In this paper we propose a new distributed clustering approach that deals efficiently with both phases: the generation of local results and the generation of global models by aggregation. For the first phase, our approach can analyse the dataset located at each site using a different clustering technique. The aggregation phase is designed so that the final clusters are compact and accurate, while the overall process is efficient in both time and memory allocation. For the evaluation, we use two well-known clustering algorithms: K-Means and DBSCAN. One of the key outputs of this distributed clustering technique is that the number of global clusters is determined dynamically; it does not need to be fixed in advance. Experimental results show that the approach is scalable and produces high quality results.

Keywords—Big Data, spatial data, clustering, distributed mining, data analysis, k-means, DBSCAN.

I. INTRODUCTION

Currently, one of the most critical data challenges, with massive economic stakes, is how to efficiently mine and manage all the data we have collected. This is even more critical when the collected data is located on different sites and owned by different organisations [1]. This has led to the development of distributed data mining (DDM) techniques to deal with huge, multi-dimensional and heterogeneous datasets that are distributed over a large number of nodes. Existing DDM techniques perform partial analysis on the local data at individual sites and then generate global models by aggregating these local results. These two steps are not independent, since naive approaches to local analysis may produce incorrect and ambiguous global data models. In order to take advantage of the knowledge mined at different locations, DDM should have a view of that knowledge which not only facilitates its integration, but also minimises the effect of the local results on the global models. Briefly, efficient management of distributed knowledge is one of the key factors affecting the outputs of these techniques [2], [3], [4], [5]. Moreover, data that is collected and stored at different locations using different instruments may have different formats and features. Traditional, centralised data mining techniques do not consider all the issues of data-driven applications, such as scalability in both response time and accuracy of solutions, distribution, and heterogeneity [6]. Some DDM approaches are based on ensemble learning, which uses various techniques to aggregate the results [7]; among the most cited in the literature are majority voting, weighted voting, and stacking [8], [9].

DDM is more appropriate for large-scale distributed platforms, where datasets are often geographically distributed and owned by different organisations. Many DDM methods, such as distributed association rules and distributed classification [10], [11], [12], [3], [13], [14], have been proposed and developed in the last few years. However, only a few studies concern distributed clustering for analysing large, heterogeneous and distributed datasets. Recent studies [15], [16], [17] have proposed distributed clustering approaches based on the same two-step process: perform partial analysis on local data at individual sites, and then aggregate the partial results to obtain global results. In this paper, we propose a distributed clustering approach based on the same two-step process; however, it significantly reduces the amount of information exchanged during the aggregation phase and automatically generates the correct number of clusters. A case study of an efficient aggregation phase has been developed on spatial datasets and proven to be very efficient: the data exchanged is reduced by more than 98% compared to the original datasets [18].

The approach can use any clustering algorithm to perform the analysis on the local datasets. As can be seen in the following sections, we tested the approach with two well-known centroid-based and density-based clustering algorithms (K-Means and DBSCAN, respectively), and the results are of very high quality. More importantly, this study shows how important the local mining algorithms are, as the accuracy of the local clusters heavily affects the quality of the final models.

The rest of the paper is organised as follows: in the next section we give an overview of the state of the art in distributed data mining and discuss the limitations of traditional techniques. We then present the proposed distributed framework and its concepts in Section III. In Section IV, we evaluate the distributed approach using two well-known algorithms: K-Means and DBSCAN. We discuss further experimental results showing the quality of our algorithm's results in Section V. Finally, we conclude in Section VI.

II. RELATED WORK

DDM techniques can be divided into two categories based on the targeted architectures of the computing platforms [19]. The first, based on parallelism, uses traditional dedicated parallel machines with tools for communication between processors. These machines are generally called super-computers and are very expensive. The second category targets networks of autonomous machines. These are called distributed systems, and are characterised by a distributed communication network connecting low-speed machines that can be of different architectures, but are very abundant [20]. The main goal of this second category of techniques is to distribute the work among the system nodes and try to minimise the response time of the whole application. Some of these techniques have already been developed and implemented in [21], [22]. However, traditional DDM methods are not always effective, as they suffer from scaling problems. This has led to the development of techniques that rely on ensemble learning [23], [24]. These new techniques are very promising: integrating ensemble learning methods into DDM makes it possible to deal with the scalability problem. One solution for dealing with large-scale data is to use parallelism, but this is very expensive in terms of communication and processing power. Another solution is to reduce the size of the training sets (sampling): each system node generates a separate sample, and these samples are analysed using a single global algorithm [25], [26]. However, this technique has the disadvantage that the sampling depends on the transfer time, which may affect the quality of the samples.

Clustering algorithms can be divided into two main categories, namely partitioning and hierarchical. Various elaborate taxonomies of existing clustering algorithms are given in the literature, and many parallel clustering versions based on these algorithms have been proposed [17], [27], [28], [29], [30], [31], [32]. These algorithms can be further classified into two sub-categories. The first consists of methods requiring multiple rounds of message passing, which need a significant amount of synchronisation. The second sub-category consists of methods that build local clustering models and send them to a central site to build the global models [18]. In [27] and [31], message-passing versions of the widely used K-Means algorithm were proposed. In [28] and [32], the authors dealt with the parallelisation of the DBSCAN density-based clustering algorithm. In [29], a parallel message-passing version of the BIRCH algorithm was presented. A parallel approach of this kind requires either message passing between processes or a global view of the dataset, or both [17]. Another approach, presented in [17], also applies a merging of local models to create the global models. Current approaches focus only on either merging local models or mining a set of local models to build global ones. If the local models cannot effectively represent the local datasets, then the accuracy of the global models will be very poor [18]. Both the partitioning and hierarchical categories suffer from drawbacks. In the partitioning class, the K-Means algorithm needs the number of clusters K to be fixed in advance, while in the majority of cases K is not known. Hierarchical clustering algorithms overcome this limit: they do not need the number of clusters as an input parameter, but they must define stopping conditions for the clustering decomposition, which is not an easy task.

III. DYNAMIC DISTRIBUTED CLUSTERING

We first describe the proposed Dynamic Distributed Clustering (DDC) model, in which the local models are based on the boundaries of clusters. We also present an evaluation of the approach with different local clustering techniques, including centroid-based (K-Means) and density-based (DBSCAN) techniques.

Fig. 1: An overview of the DDC Approach.

The DDC approach includes two main steps. In the first step, as usual, we cluster the datasets located on each node of the system and select good local representatives.
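The two-step process described above (cluster locally, exchange only compact local representatives, then merge them) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the local phase uses a plain K-Means with a deterministic farthest-point seeding, and each local cluster is summarised by a hypothetical centroid-plus-radius representative rather than the cluster boundaries DDC actually uses. The aggregation phase merges any local clusters whose representatives overlap, so the number of global clusters emerges from the data instead of being fixed in advance.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def kmeans(points, k, iters=20):
    """Plain Lloyd's K-Means with deterministic farthest-point seeding."""
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist(p, c) for c in centroids)))
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            clusters[min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))].append(p)
        clusters = [c for c in clusters if c]  # drop empty clusters
        centroids = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                     for c in clusters]
    return clusters

def local_phase(points, k):
    """Phase 1 (per node): cluster the local data, then reduce each cluster to
    a compact (centroid, radius) representative instead of raw points."""
    reps = []
    for c in kmeans(points, k):
        cx = sum(p[0] for p in c) / len(c)
        cy = sum(p[1] for p in c) / len(c)
        reps.append(((cx, cy), max(dist((cx, cy), p) for p in c)))
    return reps

def aggregate(reps):
    """Phase 2 (coordinator): union-find merge of local clusters whose
    representatives overlap; returns the number of global clusters."""
    parent = list(range(len(reps)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(reps)):
        for j in range(i + 1, len(reps)):
            (ci, ri), (cj, rj) = reps[i], reps[j]
            if dist(ci, cj) <= ri + rj:  # overlapping clusters -> same global cluster
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(reps))})

# Two nodes, two local clusters each; one cluster per node overlaps the other node's.
node1 = [(0, 0), (1, 0), (0, 1), (1, 1), (10, 0.5), (11, 0.5), (10, 1.5), (11, 1.5)]
node2 = [(10, 0), (11, 0), (10, 1), (11, 1), (20, 0), (21, 0), (20, 1), (21, 1)]
reps = local_phase(node1, 2) + local_phase(node2, 2)
print(aggregate(reps))  # 3 global clusters: the two overlapping mid-blobs merge
```

Note that only four (centroid, radius) pairs cross the network instead of sixteen raw points; this is the kind of reduction in exchanged data that motivates the 98% figure reported for the boundary-based case study.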
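DDC treats the local clustering algorithm as a black box, and the evaluation uses both K-Means and DBSCAN. For readers unfamiliar with the latter, a minimal quadratic-time DBSCAN sketch (an illustration of the sequential algorithm, not the parallel versions of [28], [32]) looks like this:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal textbook DBSCAN: returns one label per point, -1 = noise."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    def neighbours(i):
        # eps-neighbourhood of point i (includes i itself)
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # provisional noise; may become a border point
            continue
        cluster += 1                  # start a new cluster from core point i
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # border point: joins, but does not expand
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:    # j is also a core point: keep expanding
                queue.extend(nb)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (1, 1),          # dense blob -> cluster 0
       (10, 10), (10, 11), (11, 10), (11, 11),  # dense blob -> cluster 1
       (5, 5)]                                  # isolated point -> noise
print(dbscan(pts, eps=1.5, min_pts=3))  # [0, 0, 0, 0, 1, 1, 1, 1, -1]
```

Unlike K-Means, DBSCAN needs no cluster count K, which is why the number of global clusters can remain dynamic; it instead requires the density parameters eps and min_pts, illustrating the point made above that even linear or near-linear algorithms demand domain knowledge to set their inputs.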
