Scalable Approximation of Kernel Fuzzy c-Means

2013 IEEE International Conference on Big Data

Zijian Zhang
Department of Electrical and Computer Engineering
Michigan Technological University
Houghton, MI 49931 USA
Email: [email protected]

Timothy C. Havens
Department of Electrical and Computer Engineering
Department of Computer Science
Michigan Technological University
Houghton, MI 49931 USA
Email: [email protected]

Abstract—Virtually every sector of business and industry that uses computing, including financial analysis, search engines, and electronic commerce, incorporates Big Data analysis into its business model. Sophisticated clustering algorithms are highly desired to deduce the nature of data by assigning labels to unlabeled data. We address two main challenges in Big Data. First, by definition, the volume of Big Data is too large to be loaded into a computer's memory (where the threshold varies with the computer used or available). Second, in real-time applications, the velocity of new incoming data prevents historical data from being stored and future data from being accessed. We therefore propose our Streaming Kernel Fuzzy c-Means (stKFCM) algorithm, which significantly reduces both computational complexity and space complexity. The proposed stKFCM requires only $O(n^2)$ memory, where $n$ is the (predetermined) size of a data subset (or data chunk) at each time step, which makes the algorithm truly scalable (as $n$ can be chosen based on the available memory). Furthermore, only $2n^2$ elements of the full $N \times N$ (where $N \gg n$) kernel matrix need to be calculated at each time step, reducing both the time spent producing the kernel elements and the complexity of the FCM algorithm. Empirical results show that stKFCM, even with very small $n$, can cluster as accurately as kernel fuzzy c-means run on the entire data set while achieving a significant speedup.

Index Terms—kernel clustering; fuzzy c-means; scalable algorithms; streaming data; projection

TABLE I
IMPORTANT ACRONYMS AND NOTATION

  Acronym               Definition
  FCM                   fuzzy c-means
  KFCM                  kernel FCM
  stKFCM                streaming KFCM
  KPC                   kernel patch clustering [1]
  RKHS                  reproducing kernel Hilbert space

  Notation              Definition
  $c$ or $k$            number of clusters
  $N$                   number of objects
  $n$                   number of objects in a data chunk
  $x$                   feature vector, $x \in \mathbb{R}^d$
  $X$                   set of $x$
  $\mathcal{X}$         set of $X$, $\{X_0, \ldots, X_t, \ldots\}$
  $U$                   partition matrix
  $u_i$                 $i$th column of $U$
  $v$                   cluster center
  $w$                   weight vector
  $\phi_i$              kernel representation of $x_i$, i.e., $\phi(x_i)$
  $\Phi$                set of $\phi$
  $\kappa(x_i, x_j)$    kernel function, $\kappa(x_i, x_j) = \phi_i \cdot \phi_j$
  $K$                   kernel matrix, $K = [\kappa(x_i, x_j)]$, $\forall i, j$
  $\mathcal{H}_K$       reproducing kernel Hilbert space imposed by $K$
  $d_\kappa(x_i, x_j)$  kernel distance, $\|\phi_i - \phi_j\|^2$
  $[i]$                 set of integers $\{1, 2, \ldots, i\}$

I. INTRODUCTION

The ubiquity of personal computing technology has produced an abundance of staggeringly large data sets, which may exceed the memory capacity of a computer (whether that computer is a cell phone or a high-performance cluster). Large data sets come from streaming data, as well as from data that are presented to a system sequentially such that future data cannot be accessed. These challenges stimulate a great need for sophisticated algorithms by which one can elucidate the similarity and dissimilarity among and between groups in these gigantic data sets.

Clustering is an exploratory tool in which data are separated into groups, such that the objects in each group are more similar to each other than to those in different groups. The applications of clustering algorithms are innumerable: clustering is widely used in pattern recognition, data mining, and data compression for deducing the nature of a data set by assigning labels to unlabeled data. Clustering itself is a general task, and plenty of algorithms have been proposed for solving it, such as k-means, single-linkage clustering, and Gaussian-mixture-model clustering.

The k-means algorithm is one of the most popular clustering algorithms due to its simplicity. For a set of $N$ feature vectors (or objects), the algorithm first chooses $k$ cluster centers at random. Each data point is assigned to its nearest cluster center, and the cluster centers are then recomputed. These steps are repeated until the algorithm converges (and there are many ways by which convergence can be defined). The output is a partition matrix $U \in \{0,1\}^{N \times k}$, a matrix of $Nk$ values; each element $u_{ik}$ is the membership of vector $x_i$ in cluster $k$, with $u_{ik} = 1$ if $x_i$ belongs to cluster $k$ and $u_{ik} = 0$ otherwise.

Fuzzy c-means (FCM) is analogous to the k-means algorithm but uses fuzzy partitions, which give more flexibility in that each object can have membership in more than one cluster. The constraint on fuzzy partitions is that all the memberships of an object must sum to 1, ensuring that every object has unit total membership in the partition: $\sum_k u_{ik} = 1$. Hence, the partition matrix $U \in [0,1]^{N \times c}$, where the partition elements now lie on the interval $[0, 1]$.
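To make the alternating update concrete, the following is a minimal sketch of the FCM iteration just described. The fuzzifier value $m = 2$, the random initialization, and the convergence tolerance are illustrative assumptions, not values prescribed by this paper:

```python
import numpy as np

def fcm(X, c, m=2.0, tol=1e-5, max_iter=300, seed=0):
    """Minimal fuzzy c-means sketch: X is an (N, d) array, c the number
    of clusters, m > 1 the fuzzifier. Returns (U, V), where U is the
    (N, c) fuzzy partition and V holds the (c, d) cluster centers."""
    rng = np.random.default_rng(seed)
    # Random initial fuzzy partition whose rows sum to 1.
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Cluster centers are fuzzy-weighted means of the data.
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Squared Euclidean distances from every point to every center.
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
        # Standard FCM membership update; each row again sums to 1.
        inv = np.fmax(d2, 1e-12) ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V
```

For example, `U, V = fcm(np.random.default_rng(1).random((500, 2)), c=3)` partitions 500 random 2-D points into three fuzzy clusters; as $m \to 1^+$ the memberships harden and the update approaches k-means.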
The FCM (as well as the k-means) model is based on the assumption that the feature vectors are grouped in similarly-sized hyperspheres. Kernel methods can overcome this limitation by projecting the vectors into a higher-dimensional feature space in which the patterns can be discovered as linear relations [2]. Consider some non-linear mapping function $\phi : x \to \phi(x) \in \mathbb{R}^{D_\kappa}$, where $D_\kappa$ is the dimensionality of the higher-dimensional feature space created by the function $\phi$. For (most) kernel algorithms, including kernel FCM, explicitly transforming $x$ is not necessary. Instead, a kernel matrix $K$ is used, which consists of the pairwise dot products of the feature vectors in the transformed high-dimensional space $\mathcal{H}_K$; this space is called the reproducing kernel Hilbert space (RKHS).
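As a concrete illustration of working in $\mathcal{H}_K$ without ever forming $\phi(x)$, the sketch below builds a kernel matrix and the kernel distance $d_\kappa(x_i, x_j) = \|\phi_i - \phi_j\|^2 = K_{ii} + K_{jj} - 2K_{ij}$ from Table I. The Gaussian (RBF) kernel and its bandwidth $\sigma$ are illustrative choices; the discussion applies to any positive-definite kernel:

```python
import numpy as np

def rbf_kernel_matrix(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix K = [kappa(x_i, x_j)] for an (N, d)
    data array X, computed without any explicit feature map phi."""
    sq = (X ** 2).sum(axis=1)
    # Pairwise squared Euclidean distances via the expansion
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 x_i . x_j
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-np.fmax(d2, 0.0) / (2.0 * sigma ** 2))

def kernel_distance(K):
    """Kernel distance d_kappa(x_i, x_j) = ||phi_i - phi_j||^2
    = K_ii + K_jj - 2 K_ij, computed for all pairs at once."""
    diag = np.diag(K)
    return diag[:, None] + diag[None, :] - 2.0 * K
```

The quadratic cost is immediately visible here: $K$ has $N^2$ entries, so for $N = 10^6$ objects it would occupy $8 \times 10^{12}$ bytes (8 TB) in double precision, which is the scalability barrier discussed next.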
Given a set of $N$ objects, literal kernel FCM (KFCM) requires storing an $N \times N$ kernel matrix $K = [\kappa(x_i, x_j)] = [\phi(x_i) \cdot \phi(x_j)]$, $i, j \in [N]$, which poses challenges for clustering Big Data. Although an abundance of research has been conducted on fuzzy clustering algorithms for Big Data [1, 3–9], only a select few of these algorithms are appropriate for kernel methods.

In this paper, we devise an approximation of the KFCM algorithm for streaming data, i.e., big data that can be viewed as a data stream. We assess the performance of our algorithm by comparing its partition matrix to that of the literal KFCM. Empirical results demonstrate that our algorithm provides results similar to the literal KFCM while requiring access only to small data chunks from the current and previous time steps. The memory requirement is reduced from $O(N^2)$ to $O(n^2)$, where $N$ and $n$ are the sizes of the full data set and of each data chunk, respectively.

II. RELATED WORK

To date, a substantial number of algorithms have been developed for clustering big data. Roughly, these algorithms can be categorized into three classes: sampling, distributed clustering, and data transformation algorithms.

A. Sampling and non-iterative extension

Sampling the data set is the most basic and obvious way to address big data. In sampling methods, algorithms are run on a reduced representative sample, and then the sample partition is non-iteratively extended to approximate the clustering solution for the remaining data in the full data set. The sampling scheme and termination criteria are the most central components of any of these approaches. The most well-known progressive sampling method is generalized extensible fast FCM [12], which is an extension of [10]. In [12], the algorithm starts with statistics-based progressive sampling and terminates with a representative sample that is appropriate to capture the overall nature of the data set. Reference [12] extends this algorithm so that it can be applied to more general cases. Instead of clustering numerical object data, [13] and [14] extend the algorithm to attack the problem of clustering numerical relational data. In kernel clustering, cluster centers are linear combinations of all the data points to be clustered; hence, the sampling approaches mentioned previously are inappropriate. The authors of [15] and [8] tackled the problem of kernel clustering by proposing a novel sampling of the kernel matrix that yields significant memory savings and reduced computational complexity while maintaining a bounded-error solution to the kernel k-means and KFCM problems. The solutions in [8, 15] are not, however, truly scalable to big data, as they require loading an $N \times n$ rectangular portion of the kernel matrix; hence, as $N$ grows, there is eventually a point at which the only loadable $n$ is less than 1, invalidating the scalability of the algorithm.

B. Distributed clustering

Distributed clustering algorithms can be classified into two types: those that incrementally load data subsets small enough to fit into the available memory, and divide-and-conquer approaches.

1) Incremental clustering: Algorithms in this category sequentially load small chunks or samples of the data, cluster each chunk in a single pass, and then combine the results from each chunk. Representative algorithms are [16] and [17]. In these approaches, the clusters are updated periodically using information from both the incoming data and obsolete data.

Single-pass FCM (spFCM) was proposed in [3]. This algorithm performs weighted FCM (wFCM) on sequential chunks of data, passing the cluster centers from each chunk on to the next. At each time step, the algorithm clusters a union set consisting of the data in the current step and the cluster centers passed from the previous step. The authors of [4] extend [3] by incorporating results from multiple preceding time steps.
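To illustrate this single-pass pattern, here is a sketch of chunked weighted clustering in the spirit of spFCM, using a weighted variant of the fcm routine shown earlier. The specific weighting rule, where each passed center carries the total membership it absorbed, is a plausible reconstruction of the general description above, not the exact procedure of [3]:

```python
import numpy as np

def wfcm(X, w, c, m=2.0, tol=1e-5, max_iter=300, seed=0):
    """Weighted FCM sketch: like plain FCM, but each point x_i carries
    a weight w_i that scales its pull on the cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = w[:, None] * (U ** m)            # weight-scaled memberships
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
        inv = np.fmax(d2, 1e-12) ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V

def single_pass_fcm(chunks, c):
    """Cluster a stream of (n_t, d) chunks in one pass: each chunk is
    clustered together with the weighted centers of the previous step."""
    V, w_V = None, None
    for X_t in chunks:
        if V is None:
            data, w = X_t, np.ones(len(X_t))
        else:
            # Union of the current chunk and the passed cluster centers.
            data = np.vstack([X_t, V])
            w = np.concatenate([np.ones(len(X_t)), w_V])
        U, V = wfcm(data, w, c)
        # Each center's new weight is the weighted membership it absorbed.
        w_V = (U * w[:, None]).sum(axis=0)
    return V
```

Only one chunk plus $c$ weighted centers is ever in memory at a time, which is the property the stKFCM algorithm proposed in this paper carries over to the kernel setting.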
