
MEAN SHIFT SPECTRAL CLUSTERING

Umut Ozertem (1), Deniz Erdogmus (1), Robert Jenssen (2)
(1) CSEE Department, Oregon Health & Science University, Portland, Oregon, USA
(2) Department of Physics, University of Tromsø, Tromsø, Norway

Abstract. In recent years there has been growing interest in clustering methods stemming from the spectral decomposition of the data affinity matrix, which have been shown to produce good results in a wide variety of situations. However, a complete theoretical understanding of these methods in terms of data distributions is still lacking. In this paper, we propose a spectral clustering based mode merging method for mean shift as a theoretically well-founded approach that enables a probabilistic interpretation of affinity based clustering through kernel density estimation. This connection also allows principled kernel optimization and enables the use of anisotropic variable-size kernels to match local data structures. We demonstrate the proposed algorithm's performance on image segmentation applications and compare its clustering results with the well-known Mean Shift and Normalized Cut algorithms.

KEYWORDS: Similarity based clustering, Nonparametric density estimation, Mean shift, Connected components, Spectral clustering

1. INTRODUCTION

Clustering has a wide range of applications in unsupervised learning; hence, it is a fundamental problem in machine learning. Applications include image segmentation, data mining, data compression, and speech recognition, to name a few. In recent years, a number of authors have suggested clustering methods based on the eigendecomposition of a suitable affinity matrix. Such methods are known as spectral clustering and are considered to be among the most effective methods in the literature. Several choices of matrix and affinity measure lead to different spectral clustering algorithms [8,11,16]. The affinity measures that characterize the similarities do not even have to obey the metric axioms, except for the symmetry property.

Spectral clustering was first conceptualized through the use of the second smallest eigenvector of the Laplacian matrix to bi-partition the data [1]. Since then, a number of related clustering methods have been suggested that rely on the eigenvectors or generalized eigenvectors of the affinity matrix. The majority of spectral clustering algorithms can be interpreted as variants of graph cut methods [2,3,4], and multiway cuts have also been investigated [5,6]. Further studies related to spectral methods are presented in [7,8,9,10,11,14].

Spectral methods are sensitive to the definition of the affinity measure, and choosing a suitable affinity measure is central to this approach. Since the literature offers no theoretical criterion for choosing the functions that assign the affinities, these algorithms must assume that a suitable affinity definition exists. Typically, Mercer kernels, such as the widely used Gaussian kernel, are utilized as affinity measures.

A different track in spectral clustering was initiated by Scott and Longuet-Higgins [12] and later improved by Ng and colleagues [13], who propose a mapping that uses the eigenvectors of the affinity matrix to transform the data from the original data space to the kernel induced feature space (KIFS); the actual clustering is then performed on the projection of the data in that space.
Normalization of the transformed data is an important step in this approach, and clustering of the projected data in the KIFS has been shown to generate very successful results for a variety of data sets. In this view, the spectral clustering problem becomes a technique for measuring data similarities by an inner product defined in the KIFS. For any Mercer kernel, the kernel trick provides a way to compute inner products in the potentially infinite dimensional KIFS. This transformation relies on the assumption that clustering in the KIFS is easier than in the original data space. In practice, however, this assumption does not hold for all Mercer kernels, and one should search for an optimal kernel design that satisfies this property. Kernel optimization is known to be a tedious task, and it remains unsolved to the satisfaction of the machine learning community, since there are no general and practical propositions in the literature. Furthermore, a single kernel typically does not describe data affinities consistently throughout the whole sample, and multiple kernel widths have been used heuristically. To determine a suitable kernel, we exploit the connection of similarity based kernel methods with kernel density estimation, which allows us to utilize results from the nonparametric density estimation literature [16].

Mean shift is an iterative nonparametric clustering approach introduced by Fukunaga and Hostetler [15]. The procedure seeks the modes of a probability density function represented by a finite set of samples. The mean shift formulation was revisited by Cheng [17], which made its potential uses in clustering and global optimization more noticeable, and the mean shift algorithm gained popularity [18,19]. Independently, a similar fixed-point algorithm for finding the modes of a Gaussian mixture was proposed, and mean shift was shown to be equivalent to expectation maximization (EM) [20,21].

Spectral clustering algorithms require the computation of the eigenvectors of the N × N affinity matrix, where N is the number of samples. The computational complexity of the eigenvector calculation is O(N^2) per eigenvector, which makes these methods impractical for very large data sets. Typically, by assuming kernels with finite support, the affinity matrices can be made sparse in order to employ efficient techniques such as the Lanczos method. We propose a mode affinity based clustering algorithm stemming from a variable-size kernel density estimate of the underlying data distribution, which motivates a mean shift like algorithm to represent the data in a much smaller affinity matrix whose size depends on the number of modes of the density estimate. Throughout the paper, we refer to the set of data samples attracted to the same mode in the mean shift algorithm as a partition. We form the affinity matrix between partitions by evaluating a suitable density distance measure; the resulting matrix can be processed by standard spectral clustering techniques to determine the final clustering solution. The computational complexity of this second step is negligible compared to other spectral techniques, since the number of modes is much smaller than the number of samples. The bottleneck is the mean shift iterations, for which simplifying propositions are discussed. The proposed method is well founded on nonparametric density estimation theory, and the resulting clustering approximates the nonparametric maximum likelihood solution.

2. MEAN SHIFT SPECTRAL CLUSTERING

In this section the details of the proposed method are discussed.
First, we present a brief overview of spectral clustering, and then mean shift in the context of kernel density estimation.

Spectral Clustering: Given a set of data vectors {x_1, ..., x_N} and a suitable kernel function K(x_i, x_j) to measure the affinities, the affinity matrix K and the normalized Laplacian matrix L are

K_{ij} = K(x_i, x_j),   L_{ij} = D_i^{-1/2} K_{ij} D_j^{-1/2}   (1)

where D_i is the normalization term given by

D_i = \sum_j K_{ij}.   (2)

There are a number of different approaches based on the eigendecomposition of either the K or the L matrix. Due to the improved eigenspread it provides, L is the usual choice [8]. Some of these approaches are:
1. Threshold the largest eigenvector of K [4].
2. Threshold the second smallest eigenvector of L [3].
3. Transform the data to the KIFS using the eigenvectors of K or L and use a simple clustering algorithm in that domain [12].

Mean Shift Algorithm: The mean shift algorithm is a mode detection procedure based on density gradient estimation. Given the data set and a kernel function K_σ(·,·), where σ denotes the kernel size, the kernel density estimate (KDE) becomes

\hat{p}(x) = (1/N) \sum_{i=1}^{N} K_{\sigma_i}(x - x_i)   (3)

In general, the kernel size could take a different full covariance form for each sample; we experiment with different choices in our simulations. Using (3), the gradient of the probability density of the data is estimated and the local maxima points y_c are obtained. At these points, the gradient is null and the Hessian is negative (semi-)definite:

\nabla \hat{p}_K(y_c) = 0,   \nabla^2 \hat{p}_K(y_c) \le 0   (4)

The mean shift iterations are simply fixed-point iterations toward these stationary points. The volume containing exactly the set of points that converge to the same mode is defined as the attraction basin of that mode.

Recently, spectral clustering approaches based on the affinity and Laplacian matrices have been shown to be essentially related to kernel density estimation followed by an assignment of class labels that minimizes the inter-cluster overlap and the cluster entropy [16]. In particular, considering spectral clustering with fixed-size kernel density estimation in this context, one can observe that mean shift becomes an optimization problem in which the angle between cluster means in the KIFS is maximized.

[Figure 1: Two Gaussian clusters with (a) balanced and (b) unbalanced a priori probabilities. Dashed lines represent the individual cluster densities; the Bayes boundary is given by their intersection point. The solid line represents the overall data density, and the approximation to the Bayes boundary is given by the local minimum between the clusters.]

Motivated by this relationship between spectral clustering and kernel density estimation, we propose a two-step spectral clustering algorithm. The first step determines the modes of the kernel density estimate with a fixed-point iterative procedure similar to the mean shift procedure; it finds the minimal potential units for clustering, called partitions, which are naturally proposed by the density estimator. The second step employs spectral clustering on a reduced-size affinity matrix consisting of similarities between the M partitions determined in the first step.
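To make the quantities in (1) and (2) concrete, the following sketch builds a Gaussian affinity matrix and the normalized Laplacian with NumPy and implements the eigenvector bi-partitioning of approach 2 in the list above. This is a minimal illustration under stated assumptions, not the paper's implementation: the Gaussian kernel width sigma and the zero threshold on the eigenvector are assumed choices.

    import numpy as np

    def affinity_and_laplacian(X, sigma=1.0):
        # Gaussian affinities, equation (1): K_ij = K(x_i, x_j).
        sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        K = np.exp(-sq / (2.0 * sigma ** 2))
        # Normalization terms, equation (2): D_i = sum_j K_ij.
        D = K.sum(axis=1)
        d = 1.0 / np.sqrt(D)
        # Normalized Laplacian, equation (1): L_ij = D_i^{-1/2} K_ij D_j^{-1/2}.
        L = d[:, None] * K * d[None, :]
        return K, L

    def bipartition_by_eigenvector(L):
        # For this normalized form L = D^{-1/2} K D^{-1/2}, the Fiedler-type
        # direction of approach 2 appears as the second *largest* eigenvector,
        # since the graph Laplacian I - L has the eigenvalue order flipped.
        w, V = np.linalg.eigh(L)    # eigenvalues in ascending order
        v = V[:, -2]
        return (v > 0).astype(int)  # illustrative zero threshold, two clusters

Note that the eigendecomposition of the full N × N matrix here is exactly the O(N^2)-per-eigenvector bottleneck discussed in the introduction; the proposed method sidesteps it by operating on modes instead of samples.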
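For completeness, the fixed-point update implied by (3) and (4) can be written out for an isotropic Gaussian kernel; this standard derivation is added here as a clarifying step and is not quoted from the paper. With K_\sigma(x - x_i) \propto \exp(-\|x - x_i\|^2 / (2\sigma^2)), the gradient of (3) is

\nabla \hat{p}(x) = (1/(N\sigma^2)) \sum_{i=1}^{N} K_\sigma(x - x_i)(x_i - x).

Setting \nabla \hat{p}(y) = 0 and solving for y yields the fixed-point (mean shift) update

y^{(t+1)} = \sum_{i=1}^{N} K_\sigma(y^{(t)} - x_i) x_i / \sum_{i=1}^{N} K_\sigma(y^{(t)} - x_i),

so each iterate moves to the kernel-weighted mean of the samples, and the stationary points that also satisfy the Hessian condition in (4) are the modes y_c.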
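The update above translates into the following NumPy sketch, which runs the fixed-point iteration from every sample and groups samples by the mode they converge to, i.e., the partitions used in the first step of the proposed method. The isotropic bandwidth sigma, the convergence tolerance, and the mode-merging radius are illustrative assumptions; the paper's anisotropic variable-size kernels would replace the single sigma.

    import numpy as np

    def mean_shift_modes(X, sigma=1.0, tol=1e-6, max_iter=500):
        Y = X.copy()  # one trajectory per sample, started at the sample
        for _ in range(max_iter):
            sq = np.sum((Y[:, None, :] - X[None, :, :]) ** 2, axis=-1)
            W = np.exp(-sq / (2.0 * sigma ** 2))          # K_sigma(y - x_i)
            Y_new = W @ X / W.sum(axis=1, keepdims=True)  # weighted-mean update
            if np.max(np.linalg.norm(Y_new - Y, axis=1)) < tol:
                Y = Y_new
                break
            Y = Y_new
        # Group trajectories that settled on the same mode (assumed merge radius).
        modes, labels = [], np.empty(len(X), dtype=int)
        for i, y in enumerate(Y):
            for m, mode in enumerate(modes):
                if np.linalg.norm(y - mode) < 0.5 * sigma:
                    labels[i] = m
                    break
            else:
                labels[i] = len(modes)
                modes.append(y)
        return np.array(modes), labels  # M modes, one partition label per sample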
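Finally, the two steps can be chained into an end-to-end sketch: mean shift produces M partitions, and spectral clustering merges them using only an M × M affinity matrix. The excerpt does not specify the inter-partition density distance measure, so a Gaussian affinity between mode locations is used below purely as a placeholder, and the two-way eigenvector split stands in for a general spectral clustering step.

    # Assumes affinity_and_laplacian, bipartition_by_eigenvector, and
    # mean_shift_modes from the sketches above are in scope.
    def mean_shift_spectral_clustering(X, sigma=1.0, mode_sigma=1.0):
        # Step 1: partitions = attraction basins of the KDE modes (M << N).
        modes, partition = mean_shift_modes(X, sigma=sigma)
        # Step 2: spectral clustering on the reduced M x M affinity matrix.
        # Placeholder affinity: Gaussian kernel between mode locations, standing
        # in for the paper's density distance between partitions (needs M >= 2).
        _, L = affinity_and_laplacian(modes, sigma=mode_sigma)
        mode_labels = bipartition_by_eigenvector(L)
        # Propagate the merged mode labels back to the individual samples.
        return mode_labels[partition]

Because M is typically much smaller than N, the eigendecomposition in step 2 is negligible next to the mean shift iterations, matching the cost analysis in the introduction.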