Automatic Kernel Clustering with a Multi-Elitist Particle Swarm Optimization Algorithm

Swagatam Das a,*, Ajith Abraham b, Amit Konar a

a Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700 032, India
b Center of Excellence for Quantifiable Quality of Service (Q2S), Norwegian University of Science and Technology, Trondheim, Norway

Pattern Recognition Letters 29 (2008) 688–699. doi:10.1016/j.patrec.2007.12.002
Received 3 February 2007; received in revised form 18 August 2007; available online 15 December 2007. Communicated by W. Pedrycz.

* Corresponding author. E-mail addresses: [email protected] (S. Das), [email protected] (A. Abraham), [email protected] (A. Konar).

Abstract

This article introduces a scheme for clustering complex and linearly non-separable datasets, without any prior knowledge of the number of naturally occurring groups in the data. The proposed method is based on a modified version of the classical Particle Swarm Optimization (PSO) algorithm, known as the Multi-Elitist PSO (MEPSO) model. It also employs a kernel-induced similarity measure instead of the conventional sum-of-squares distance. Use of the kernel function makes it possible to cluster data that is linearly non-separable in the original input space into homogeneous groups in a transformed high-dimensional feature space. A new particle representation scheme has been adopted for selecting the optimal number of clusters from several possible choices. The performance of the proposed method has been extensively compared with a few state-of-the-art clustering techniques over a test suite of several artificial and real-life datasets. Based on the computer simulations, some empirical guidelines have been provided for selecting suitable parameters of the PSO algorithm.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Particle Swarm Optimization; Kernel; Clustering; Validity index; Genetic algorithm

1. Introduction

Clustering means the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a 'cluster', consists of objects that are similar among themselves and dissimilar to objects of other groups. In the past few decades, cluster analysis has played a central role in diverse domains of science and engineering (Evangelou et al., 2001; Lillesand and Keifer, 1994; Andrews, 1972; Rao, 1971; Duda and Hart, 1973; Fukunaga, 1990; Everitt, 1993; Hartigan, 1975).

Data clustering algorithms can be hierarchical or partitional (Frigui and Krishnapuram, 1999; Leung et al., 2000). In hierarchical clustering, the output is a tree showing a sequence of clusterings, with each clustering being a partition of the data set (Leung et al., 2000). Partitional clustering algorithms, on the other hand, attempt to decompose the data set directly into a set of disjoint clusters. They try to optimize a certain criterion (e.g. a squared-error function). The criterion function may emphasize either the local structure of the data, as by assigning clusters to peaks in the probability density function, or its global structure. Clustering can also be performed in two different modes: crisp and fuzzy. In crisp clustering, the clusters are disjoint and non-overlapping; any pattern may belong to one and only one class. In fuzzy clustering, a pattern may belong to all the classes with a certain fuzzy membership grade (Jain et al., 1999). A comprehensive survey of the various clustering algorithms can be found in Jain et al. (1999).
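To make the crisp/fuzzy distinction concrete, the small Python sketch below (not from the paper; the membership values are invented for illustration) contrasts a crisp label vector, where each pattern belongs to exactly one cluster, with a fuzzy membership matrix whose rows sum to one:

```python
import numpy as np

# Crisp clustering: every pattern belongs to exactly one of k = 3 clusters.
crisp_labels = np.array([0, 0, 2, 1, 2])          # pattern i -> single cluster id

# Fuzzy clustering: pattern i belongs to every cluster with a membership
# grade u[i, j] in [0, 1]; each row sums to 1 (illustrative values only).
fuzzy_u = np.array([[0.90, 0.05, 0.05],
                    [0.80, 0.15, 0.05],
                    [0.10, 0.10, 0.80],
                    [0.20, 0.70, 0.10],
                    [0.05, 0.15, 0.80]])

# Hardening the fuzzy partition (taking the largest grade in each row)
# recovers a crisp one.
assert np.array_equal(fuzzy_u.argmax(axis=1), crisp_labels)
assert np.allclose(fuzzy_u.sum(axis=1), 1.0)
```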
The problem of partitional clustering has been approached from diverse fields of knowledge like statistics (multivariate analysis) (Forgy, 1965), graph theory (Zahn, 1971), expectation maximization algorithms (Mitchell, 1997), artificial neural networks (Mao and Jain, 1995; Pal et al., 1993; Kohonen, 1995), evolutionary computing (Falkenauer, 1998; Paterlini and Minerva, 2003; Murthy and Chowdhury, 1996; Bandyopadhyay and Maulik, 2002), swarm intelligence (Paterlini and Krink, 2006; Omran et al., 2005; Kanade and Hall, 2003) and so on.

The Euclidean distance metric, employed by most of the existing partitional clustering algorithms, works well with datasets in which the natural clusters are nearly hyperspherical and linearly separable (like the artificial dataset 1 used in this paper). But it causes severe misclassifications when the dataset is complex, with linearly non-separable patterns (like the synthetic datasets 2, 3 and 4 described in Section 4 of the present paper). We would like to mention here that most evolutionary algorithms could potentially work with an arbitrary distance function and are not limited to the Euclidean distance.

Moreover, very few works (Bandyopadhyay and Maulik, 2002; Hamerly and Elkan, 2003; Sarkar et al., 1997; Omran et al., 2005) have been undertaken to make an algorithm learn the correct number of clusters 'k' in a dataset, instead of accepting it as a user input. Although the problem of finding an optimal k is quite important from a practical point of view, the research outcome is still unsatisfactory even for some of the benchmark datasets (Rosenberger and Chehdi, 2000).

In the present work, we propose a new approach to the problem of automatic clustering (without any prior knowledge of k) using a modified version of the PSO algorithm (Kennedy and Eberhart, 1995). Our procedure employs a kernel-induced similarity measure instead of the conventional Euclidean distance metric. A kernel function measures the distance between two data points by implicitly mapping them into a high-dimensional feature space where the data is linearly separable. Not only does it preserve the inherent structure of groups in the input space, but it also simplifies the associated structure of the data patterns (Muller et al., 2001; Girolami, 2002). Several kernel-based learning methods, including the Support Vector Machine (SVM), have recently been shown to perform remarkably well in supervised learning (Scholkopf and Smola, 2002; Vapnik, 1998; Zhang and Chen, 2003; Zhang and Rudnicky, 2002). The kernelized versions of the k-means (Forgy, 1965) and the fuzzy c-means (FCM) (Bezdek, 1981) algorithms, reported in Zhang and Rudnicky (2002) and Zhang and Chen (2003) respectively, have reportedly outperformed their original counterparts over several test cases.

We may now summarize the new contributions made in the paper as follows:

(i) We develop an alternative framework for learning the number of partitions in a dataset, besides the simultaneous refining of the clusters, through one shot of optimization.
(ii) We propose a new version of the PSO algorithm based on the multi-elitist strategy, well known in the field of evolutionary algorithms. Our experiments indicate that the proposed MEPSO algorithm yields more accurate results at a faster pace than the classical PSO in the context of the present problem.
(iii) We reformulate a recently proposed cluster validity index (known as the CS measure) (Chou et al., 2004) using the kernelized distance metric. The new CS measure forms the objective function to be minimized for optimal clustering.

We have undertaken extensive performance comparisons in order to establish the effectiveness of the proposed method in detecting clusters from several synthetic as well as real-world datasets. Some empirical guidelines for choosing the parameters of the MEPSO-based clustering algorithm have been provided. The effect of the growth of feature-space dimensionality on the performance of the algorithm was also studied on the real-life datasets. The rest of the paper is organised as follows: Section 2 briefly describes the clustering problem, the kernel distance metric and the reformulation of the CS measure. In Section 3, we briefly outline the classical PSO and then introduce the MEPSO algorithm. Section 4 describes the novel procedure for automatic clustering with MEPSO. Experimental results are presented and discussed in Section 5. Finally, the paper is concluded in Section 6.
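Since MEPSO builds on the classical PSO of Kennedy and Eberhart (1995), a minimal sketch of the standard global-best PSO update is given below for orientation; it is not the authors' MEPSO. The swarm size, iteration count, inertia weight w and acceleration coefficients c1, c2 are common textbook defaults rather than values taken from this paper, and the sphere function merely stands in for the kernelized CS measure that the authors actually minimize.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.72, c1=1.49, c2=1.49):
    """Classical global-best PSO (illustrative parameters, not the paper's)."""
    x = rng.uniform(-5, 5, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # personal best positions
    pbest_f = np.apply_along_axis(f, 1, x)          # personal best fitness
    gbest = pbest[pbest_f.argmin()].copy()          # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: minimize the sphere function as a stand-in objective.
best_x, best_f = pso_minimize(lambda z: float(np.sum(z**2)), dim=3)
```

The multi-elitist modification introduced in Section 3 of the paper changes how the best positions guide the swarm; the update loop above shows only the classical baseline it departs from.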
2. Kernel based clustering and corresponding validity index

2.1. The crisp clustering problem

Let $X = \{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n\}$ be a set of $n$ unlabeled patterns in the $d$-dimensional input space. Here, each element $x_{i,j}$ in the $i$th vector $\vec{x}_i$ corresponds to the $j$th real-valued feature $(j = 1, 2, \ldots, d)$ of the $i$th pattern $(i = 1, 2, \ldots, n)$. Given such a set, the partitional clustering algorithm tries to find a partition $C = \{C_1, C_2, \ldots, C_k\}$ of $k$ classes, such that the similarity of the patterns in the same cluster is maximum and patterns from different clusters differ as far as possible. The partitions should maintain the following properties:

(1) $C_i \neq \emptyset \quad \forall i \in \{1, 2, \ldots, k\}$.
(2) $C_i \cap C_j = \emptyset \quad \forall i \neq j,\ i, j \in \{1, 2, \ldots, k\}$.
(3) $\bigcup_{i=1}^{k} C_i = X$.

The most popular way to evaluate similarity between two patterns amounts to the use of the Euclidean distance, which between any two $d$-dimensional patterns $\vec{x}_i$ and $\vec{x}_j$ is given by

$$d(\vec{x}_i, \vec{x}_j) = \sqrt{\sum_{p=1}^{d} (x_{i,p} - x_{j,p})^2} = \|\vec{x}_i - \vec{x}_j\|. \qquad (1)$$

2.2. The kernel based similarity measure

Given a ...

... literature are the Dunn's index (DI) (Hertz et al., 2006; Dunn, 1974), the Calinski–Harabasz index (Calinski and ...
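Section 2.2 breaks off in this excerpt, but the device it builds on is standard: any Mercer kernel $K(\cdot,\cdot)$ implicitly defines a mapping $\Phi$ into a high-dimensional feature space, and the squared distance between two mapped patterns can be computed from kernel evaluations alone, since $\|\Phi(\vec{x}_i) - \Phi(\vec{x}_j)\|^2 = K(\vec{x}_i, \vec{x}_i) - 2K(\vec{x}_i, \vec{x}_j) + K(\vec{x}_j, \vec{x}_j)$. The sketch below assumes the Gaussian (RBF) kernel, a common choice in the kernel clustering literature; the paper's exact kernel and width parameter are not visible in this excerpt.

```python
import numpy as np

def gaussian_kernel(xi, xj, sigma=1.0):
    """K(xi, xj) = exp(-||xi - xj||^2 / (2 sigma^2)); sigma is illustrative."""
    diff = np.asarray(xi, float) - np.asarray(xj, float)
    return np.exp(-diff.dot(diff) / (2.0 * sigma**2))

def kernel_distance_sq(xi, xj, kernel=gaussian_kernel):
    """Squared distance between Phi(xi) and Phi(xj) via the kernel trick:
    ||Phi(xi) - Phi(xj)||^2 = K(xi,xi) - 2 K(xi,xj) + K(xj,xj)."""
    return kernel(xi, xi) - 2.0 * kernel(xi, xj) + kernel(xj, xj)

# For the Gaussian kernel K(x, x) = 1, so the expression reduces to
# 2 * (1 - K(xi, xj)); points far apart in input space approach distance 2.
a, b = np.array([0.0, 0.0]), np.array([3.0, 4.0])
print(kernel_distance_sq(a, b))   # ~2.0, since K(a, b) = exp(-12.5) is tiny
```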
