A Nonparametric Information Theoretic Clustering Algorithm

Lev Faivishevsky [email protected]
Jacob Goldberger [email protected]
School of Engineering, Bar-Ilan University, Ramat-Gan 52900, Israel

Appearing in Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 2010. Copyright 2010 by the author(s)/owner(s).

Abstract

In this paper we propose a novel clustering algorithm based on maximizing the mutual information between data points and clusters. Unlike previous methods, we neither assume the data are given in terms of distributions nor impose any parametric model on the within-cluster distribution. Instead, we utilize a non-parametric estimation of the average cluster entropies and search for a clustering that maximizes the estimated mutual information between data points and clusters. The improved performance of the proposed algorithm is demonstrated on several standard datasets.

1. Introduction

Effective automatic grouping of objects into clusters is one of the fundamental problems in machine learning and in other fields of study. In many approaches, the first step toward clustering a dataset is extracting a feature vector from each object. This reduces the problem to the aggregation of groups of vectors in a feature space. Then various clustering algorithms are applied to these feature vectors. The specific form of the feature space, along with possible additional information about cluster structure, determines a class of algorithms that may be used to group the vectors. According to the required form of input, three major kinds of clustering algorithms may be defined.

The first kind of algorithms assumes that the feature vectors are given as points in a finite-dimensional space R^d without additional information on the cluster structure. Distances between vectors may naturally give rise to pairwise data point similarities. The class of methods that cluster vectors in R^d includes the spectral clustering algorithms (Ng et al., 2002; Zelnik-Manor & Perona, 2005), which have attracted much attention in recent years. The second kind of clustering algorithms also admits input in the form of vectors in R^d but in addition implicitly or explicitly assumes certain types of in-cluster distribution (e.g. applying the EM algorithm to learn a Gaussian mixture density). Although these iterative methods can suffer from the drawback of local optima, they provide high quality results when the data clusters are organized according to the anticipated structures, in this case in convex sets. When the data are arranged in non-convex sets (e.g. concentric circles) these algorithms tend to fail. It follows that in a certain sense the second kind of algorithms is a subset of the first kind. The third kind of algorithms corresponds to the case of distributional clustering. Here each data point is described as a distribution; in other words, the feature representation of each data point is a parametric description of the distribution. Both discrete and continuous distributions may be considered. The former case is illustrated by the generic example of document clustering. In a continuous setup we can consider the problem where each object is a Gaussian distribution and we want to cluster similar Gaussians together. In all these cases the cluster distribution is a (possibly weighted) average of the distributions of the objects that are assigned to the cluster. Hence the third kind of algorithms is a subset of the second kind.

The relative entropy, or the Kullback-Leibler divergence, is a natural measure of the distance between distributions. Therefore this quantity is of particular importance in the field of distributional clustering. Given such a choice of distance, the mutual information becomes an optimal clustering criterion (Banerjee et al., 2004). In practice, the mutual information is computed between cluster labels and feature representations of data points in terms of distributions (Dhillon et al., 2003). Similar ideas gave rise to the Information Bottleneck approach (Tishby et al., 1999). The mutual information has been proven to be a powerful clustering criterion for document clustering (Slonim & Tishby, 2000; Slonim et al., 2002) and for clustering of Gaussians (Davis & Dhillon, 2007). However, all the above methods are limited to the domain of distributional clustering, since they require an explicit parametric representation of data points. In all these methods there is an explicit assumption regarding the parametric structure of the intra-cluster distribution.

In this paper we extend the information theoretic criterion to a general domain of clustering algorithms whose inputs are simply vectors in R^d. In particular, we maximize the mutual information between cluster labels and features of data points without imposing any parametric model on the cluster distribution. Our method computes this target in an intuitive, straightforward manner using a novel non-parametric entropy estimation technique (Faivishevsky & Goldberger, 2009). We show that this results in an efficient clustering method with state-of-the-art performance on standard real-world datasets. The remainder of this paper is organized as follows. Section 2 discusses the mutual information criterion for clustering in detail. Section 3 introduces the Nonparametric Information Clustering (NIC) algorithm. Section 4 reviews related work. Section 5 describes numerical experiments on several standard datasets.

2. The Mutual Information Criterion for Clustering

A (hard) clustering of a set of objects X = {x_1, ..., x_n} into n_c clusters is a function C : X -> {1, ..., n_c}. Denote the cluster of x_i by c_i, and denote the number of points assigned to the j-th cluster by n_j. Given a clustering score function S(C), the task of clustering the set X is finding a clustering C(X) that optimizes S(C). A clustering task is defined by the object description and the score function S(C).

In this paper we consider the data points as independent samples from a distribution that can be either given as part of the problem statement or unknown. The clustering C is a function of the random variable X, and therefore C(X) is also a random variable. Hence we can define the mutual information I(X; C) based on the joint distribution. Since I(X; C) = H(X) - H(X|C) and H(X) does not depend on the specific clustering, we can use the conditional entropy H(X|C) as a measure of the clustering quality:

S_{MI}(C) = H(X|C) = \sum_{j=1}^{n_c} \frac{n_j}{n} H(X|C = j)    (1)

The mutual information score function, which measures the intra-cluster entropy, resembles the k-means score that measures the intra-cluster variance. Clearly, the MI criterion provides a more robust treatment of various cases of differently distributed data, as discussed below. This measure is intuitive; we expect that in a good clustering the objects in the same cluster will be similar, whereas similar objects will not be assigned to different clusters. Expressing this intuition in information theory terminology, we expect that the average entropy of the object distribution in a cluster will be small. This is obtained by maximizing I(X; C).
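To make the score in Eq. (1) concrete, the following minimal Python sketch (ours, not from the paper; the names s_mi and entropy_estimator are illustrative) computes the weighted average of within-cluster entropies for a hard clustering, with the per-cluster entropy estimator left as a plug-in. Minimizing this conditional entropy over assignments is equivalent to maximizing I(X; C).

```python
import numpy as np

def s_mi(X, labels, entropy_estimator):
    """Eq. (1): S_MI(C) = sum_j (n_j / n) * H(X | C = j).

    `entropy_estimator` is any plug-in function mapping the points of one
    cluster to an entropy estimate; the criterion itself is agnostic to it.
    """
    n = len(labels)
    score = 0.0
    for j in np.unique(labels):
        cluster_points = X[labels == j]
        score += (len(cluster_points) / n) * entropy_estimator(cluster_points)
    return score
```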
To compile the MI cost function into a clustering algorithm we have to tackle the technical issue of computing the within-cluster entropy terms H(X|C = j). The simplest case is when the objects all belong to a finite set. In this case the distribution p(X|C = j) is discrete and the entropy can be computed based on the frequency histogram of the objects in the cluster. We demonstrate this on the generic problem of unsupervised document clustering. Utilizing the bag-of-words paradigm (Salton & McGill, 1983), each document is viewed as a bag containing all the words that appear in it, and each cluster can be viewed as a bag containing all the words from all the documents that are mapped into that cluster. More formally, each document i is represented by a vector {n_1^i, n_2^i, ..., n_M^i}, where n_w^i is the number of instances of word w in the document and M is the size of the word dictionary. Given a document clustering C we can easily compute the word statistics in each cluster. Defining the average frequency of occurrence of word w in cluster j by:

p(w|C = j) \propto \sum_{i : c_i = j} n_w^i    (2)

we arrive at the within-cluster entropy:

H(X|C = j) = -\sum_{w=1}^{M} p(w|C = j) \log p(w|C = j)    (3)

The criterion I(X; C) in this context is also known as the Information Bottleneck (IB) principle (Tishby et al., 1999). (In the usual definition of the IB we further assume a uniform prior over the documents, which means that if we want to make the above framework consistent with the IB we need to weight each word inversely proportionally to the document size.)

There is a subtle point here that needs to be clarified. The task we want to perform here is clustering the documents in the corpus such that all the documents in a given group are related to the same topic. However, in the mutual information framework described above, technically the objects to be clustered are the words. The entropy H(X|C = j) we compute in expression (3) is the word entropy in the cluster. The document structure is used to place a semi-supervised constraint that all the words in a given document will be assigned to the same cluster.
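A minimal sketch of this discrete computation, under the assumption that the corpus is given as a document-by-word count matrix (the function and variable names below are ours), might look as follows. Since, as noted above, the objects being clustered are technically the word tokens, the weights n_j/n of Eq. (1) are taken here as the fraction of the total word mass falling in each cluster.

```python
import numpy as np

def cluster_word_entropy(word_counts, labels, j):
    """Eqs. (2)-(3): word distribution and entropy of cluster j.

    `word_counts` is an (n_docs x M) matrix whose entry (i, w) is n_w^i,
    and `labels` assigns each document to a cluster.
    """
    counts = word_counts[labels == j].sum(axis=0)   # sum of n_w^i over documents in cluster j
    p = counts / counts.sum()                       # normalized p(w | C = j), Eq. (2)
    p = p[p > 0]                                    # convention: 0 * log 0 = 0
    return -np.sum(p * np.log(p))                   # H(X | C = j), Eq. (3)

def s_mi_documents(word_counts, labels):
    """Eq. (1) for word-level objects: clusters weighted by their word mass."""
    total = word_counts.sum()
    return sum((word_counts[labels == j].sum() / total) *
               cluster_word_entropy(word_counts, labels, j)
               for j in np.unique(labels))
```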
The situation is more complicated if the objects we want to cluster do not belong to a finite set but instead are described by continuous feature vectors.

3. Nonparametric Mutual Information Clustering

Assume that a dataset X is represented by a set of features x_1, ..., x_n ∈ R^d without any additional information on the feature distributions, either for individual objects or for objects that are in the same cluster.
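In this continuous setting, the construction outlined in the abstract and introduction is to plug a nonparametric estimate of each within-cluster entropy H(X|C = j) into Eq. (1). As a hedged illustration only, and not the paper's exact score, the sketch below uses a pairwise log-distance estimate in the spirit of the smooth kNN-based entropy estimator of Faivishevsky & Goldberger (2009).

```python
import numpy as np
from scipy.spatial.distance import pdist

def pairwise_log_distance_entropy(X):
    """Illustrative nonparametric entropy estimate for an (m x d) sample:
    proportional to d times the mean of log pairwise Euclidean distances,
    in the spirit of Faivishevsky & Goldberger (2009). Additive terms that
    depend only on m and d are omitted, so this is a sketch of the idea
    rather than the paper's final clustering score.
    """
    m, d = X.shape
    if m < 2:
        return 0.0                              # a singleton cluster contributes no pairwise terms
    dists = pdist(X)                            # all m*(m-1)/2 pairwise Euclidean distances
    return d * np.mean(np.log(dists + 1e-12))   # small epsilon guards against log(0) for duplicates

# Plugged into the weighted score of Eq. (1), e.g. s_mi(X, labels, pairwise_log_distance_entropy),
# this gives a clustering criterion that needs no parametric model of the within-cluster density.
```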
