Submodular Hypergraphs: p-Laplacians, Cheeger Inequalities and Spectral Clustering


Pan Li, Olgica Milenkovic

Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, USA. Correspondence to: Pan Li <[email protected]>, Olgica Milenkovic <[email protected]>. Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).

Abstract

We introduce submodular hypergraphs, a family of hypergraphs that have different submodular weights associated with different cuts of hyperedges. Submodular hypergraphs arise in clustering applications in which higher-order structures carry relevant information. For such hypergraphs, we define the notion of p-Laplacians and derive corresponding nodal domain theorems and k-way Cheeger inequalities. We conclude with a description of algorithms for computing the spectra of 1- and 2-Laplacians that constitute the basis of new spectral hypergraph clustering methods.

1. Introduction

Spectral clustering algorithms are designed to solve a relaxation of the graph cut problem based on graph Laplacians that capture pairwise dependencies between vertices, and they produce sets with small conductance that represent clusters. Due to their scalability and provable performance guarantees, spectral methods represent one of the most prevalent graph clustering approaches (Chung, 1997; Ng et al., 2002).

Many relevant problems in clustering, semi-supervised learning and MAP inference (Zhou et al., 2007; Hein et al., 2013; Zhang et al., 2017) involve higher-order vertex dependencies that require one to consider hypergraphs instead of graphs. To address spectral hypergraph clustering problems, several approaches have been proposed that typically operate by first projecting the hypergraph onto a graph via clique expansion and then performing spectral clustering on the resulting graph (Zhou et al., 2007). Clique expansion involves transforming a weighted hyperedge into a weighted clique such that the graph cut weights approximately preserve the cut weights of the hyperedge. Almost exclusively, these approximations have been based on the assumption that each hyperedge cut has the same weight, in which case the underlying hypergraph is termed homogeneous.

However, in image segmentation, MAP inference on Markov random fields (Arora et al., 2012; Shanu et al., 2016), network motif studies (Li & Milenkovic, 2017; Benson et al., 2016; Tsourakakis et al., 2017) and rank learning (Li & Milenkovic, 2017), the higher-order relations between vertices captured by hypergraphs are typically associated with different cut weights. In (Li & Milenkovic, 2017), Li and Milenkovic generalized the notion of hyperedge cut weights by assuming that different hyperedge cuts have different weights, so that each hyperedge is associated with a vector of weights rather than a single scalar weight. If the weights of the hyperedge cuts are submodular, then one can use a graph with nonnegative edge weights to efficiently approximate the hypergraph, provided that the largest size of a hyperedge is a relatively small constant. This property of the projected hypergraphs allows one to leverage spectral hypergraph clustering algorithms based on clique expansions with provable performance guarantees. Unfortunately, the clique expansion method in general has two drawbacks: the spectral clustering algorithm for graphs used in the second step is merely quadratically optimal, while the projection step can cause a large distortion.
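For concreteness, the snippet below sketches the simplest homogeneous clique expansion in Python. It is a minimal illustration, not the projection of (Li & Milenkovic, 2017): the uniform reweighting w_e / (|e| - 1) is one common convention among several in the literature.

```python
from itertools import combinations
from collections import defaultdict

def clique_expansion(hyperedges, weights):
    """Project a homogeneous hypergraph onto a graph by replacing each
    weighted hyperedge with a weighted clique over its vertices.

    The w_e / (|e| - 1) scaling is an illustrative choice; the exact
    reweighting scheme varies across the literature."""
    graph_weights = defaultdict(float)
    for e, w_e in zip(hyperedges, weights):
        scale = w_e / (len(e) - 1)
        for u, v in combinations(sorted(e), 2):
            graph_weights[(u, v)] += scale  # overlapping cliques accumulate
    return dict(graph_weights)

# Toy hypergraph: two overlapping 3-vertex hyperedges.
H = [(0, 1, 2), (1, 2, 3)]
w = [1.0, 2.0]
print(clique_expansion(H, w))  # edge (1, 2) receives mass from both hyperedges
```

Cuts of the expanded graph only approximate the corresponding hyperedge cuts, which is precisely the distortion issue raised above.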
To address the quadratic optimality issue in graph clustering, Amghibech (Amghibech, 2003) introduced the notion of p-Laplacians of graphs and derived Cheeger-type inequalities for the second smallest eigenvalue of a p-Laplacian, $p > 1$, of a graph. These results motivated Bühler and Hein's work (Bühler & Hein, 2009) on spectral clustering based on p-Laplacians, which provided tighter approximations of the Cheeger constant. Szlam and Bresson (Szlam & Bresson, 2010) showed that the 1-Laplacian allows one to compute the Cheeger constant exactly, but at the cost of computational hardness (Chang, 2016). Very little is known about the use of p-Laplacians for hypergraph clustering and their spectral properties.

To address the clique expansion problem, Hein et al. (Hein et al., 2013) introduced a clustering method for homogeneous hypergraphs that avoids expansions and works directly with the total variation of homogeneous hypergraphs, without investigating the spectral properties of the operator. The only other line of work trying to mitigate the projection problem is due to Louis (Louis, 2015), who used a natural extension of 2-Laplacians for homogeneous hypergraphs, derived quadratically-optimal Cheeger-type inequalities and proposed a semidefinite programming (SDP) based algorithm whose complexity scales with the size of the largest hyperedge in the hypergraph.

Our contributions are threefold. First, we introduce submodular hypergraphs. Submodular hypergraphs allow one to perform hyperedge partitionings that depend on the subsets of elements involved in each part, thereby respecting higher-order and other constraints in graphs (see (Li & Milenkovic, 2017; Arora et al., 2012; Fix et al., 2013) for applications in food network analysis, learning to rank, subspace clustering and image segmentation). Second, we define p-Laplacians for submodular hypergraphs and generalize the corresponding discrete nodal domain theorems (Tudisco & Hein, 2016; Chang et al., 2017) and higher-order Cheeger inequalities. Even for homogeneous hypergraphs, nodal domain theorems were not known, and only one low-order Cheeger inequality for 2-Laplacians was established by Louis (Louis, 2015). An analytical obstacle in the development of such a theory is the fact that p-Laplacians of hypergraphs are operators that act on vectors and produce sets of values. Consequently, operators and eigenvalues have to be defined in a set-theoretic manner. Third, based on the newly established spectral hypergraph theory, we propose two spectral clustering methods that learn the second smallest eigenvalues of 2- and 1-Laplacians. The algorithm for 2-Laplacian eigenvalue computation is based on an SDP framework and can provably achieve quadratic optimality with an $O(\sqrt{\zeta(E)})$ approximation constant, where $\zeta(E)$ denotes the size of the largest hyperedge in the hypergraph. The algorithm for 1-Laplacian eigenvalue computation is based on the inverse power method (IPM) (Hein & Bühler, 2010) and only has convergence guarantees. The key novelty of the IPM-based method is that the critical inner-loop optimization problem of the IPM is efficiently solved by algorithms recently developed for decomposable submodular minimization (Jegelka et al., 2013; Ene & Nguyen, 2015; Li & Milenkovic, 2018). Although it lacks performance guarantees, the IPM-based algorithm exploits the fact that the 1-Laplacian provides the tightest approximation of the Cheeger constant and – as opposed to the clique expansion method (Li & Milenkovic, 2017) – performs very well empirically, even when the size of the hyperedges is large. This fact is illustrated on several UC Irvine machine learning datasets available from (Asuncion & Newman, 2007).
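The graph p-Laplacian referenced here is commonly associated with the p-Dirichlet form $\sum_{(uv) \in E} w_{uv} |x_u - x_v|^p$, which reduces to the usual Laplacian quadratic form for $p = 2$ and to the graph total variation for $p = 1$. As a rough illustration (on a toy graph of our own, with a plain $\|x\|_p^p$ normalization standing in for the degree-weighted, centered quotients used in the p-spectral literature), the following sketch evaluates this form and the associated Rayleigh-type quotient:

```python
import numpy as np

def p_dirichlet_form(edges, weights, x, p):
    """Evaluate sum over (u, v) in E of w_uv * |x_u - x_v|^p.
    p = 2 gives the standard Laplacian quadratic form <x, L x>;
    p = 1 gives the graph total variation of x."""
    return sum(w * abs(x[u] - x[v]) ** p for (u, v), w in zip(edges, weights))

# Toy graph: two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
weights = [1.0] * len(edges)
x = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])  # indicator-style cut vector

for p in (1.0, 1.5, 2.0):
    q = p_dirichlet_form(edges, weights, x, p)
    # Simplified Rayleigh-type quotient; actual p-spectral formulations
    # impose degree weights and centering constraints that we omit here.
    print(f"p={p}: form={q:.3f}, quotient={q / np.sum(np.abs(x) ** p):.3f}")
```

Only the bridge edge contributes to the form for this cut vector, which is why quotients of this kind track cut sparsity; driving $p$ toward 1 tightens the relation to the Cheeger constant, consistent with the results of Szlam and Bresson cited above.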
The paper is organized as follows. Section 2 contains an overview of graph Laplacians and introduces the notion of submodular hypergraphs. The section also contains a description of hypergraph Laplacians and relevant concepts in submodular function theory. Section 3 presents the fundamental results in the spectral theory of p-Laplacians, while Section 4 introduces two algorithms for evaluating the second smallest eigenvalue of p-Laplacians needed for 2-way clustering. Section 5 presents experimental results. All proofs are relegated to the Supplementary Material.

2. Mathematical Preliminaries

A weighted graph $G = (V, E, w)$ is an ordered pair of two sets, the vertex set $V = [N] = \{1, 2, \ldots, N\}$ and the edge set $E \subseteq V \times V$, equipped with a weight function $w : E \to \mathbb{R}_+$.

A cut $C = (S, \bar{S})$ is a bipartition of the set $V$, while the cut-set (boundary) of the cut $C$ is defined as the set of edges that have one endpoint in $S$ and one in the complement of $S$, $\bar{S}$, i.e., $\partial S = \{(u, v) \in E \mid u \in S, v \in \bar{S}\}$. The weight of the cut induced by $S$ equals $\mathrm{vol}(\partial S) = \sum_{u \in S, v \in \bar{S}} w_{uv}$, while the conductance of the cut is defined as
$$c(S) = \frac{\mathrm{vol}(\partial S)}{\min\{\mathrm{vol}(S), \mathrm{vol}(\bar{S})\}},$$
where $\mathrm{vol}(S) = \sum_{u \in S} \mu_u$ and $\mu_u = \sum_{v \in V} w_{uv}$. Whenever clear from the context, for $e = (uv)$ we write $w_e$ instead of $w_{uv}$. Note that in this setting, the vertex weights $\mu_u$ are determined by the weights of the edges $w_e$ incident to $u$. Clearly, one can use a different choice for these weights and make them independent of the edge weights, which is a generalization we pursue in the context of submodular hypergraphs. The smallest conductance of any bipartition of a graph $G$ is denoted by $h_2$ and referred to as the Cheeger constant of the graph.

A generalization of the Cheeger constant is the $k$-way Cheeger constant of a graph $G$. Let $P_k$ denote the set of all partitions of $V$ into $k$ disjoint nonempty subsets, i.e., $P_k = \{(S_1, S_2, \ldots, S_k) \mid S_i \subset V,\ S_i \neq \emptyset,\ S_i \cap S_j = \emptyset,\ \forall i, j \in [k], i \neq j\}$. The $k$-way Cheeger constant is defined as
$$h_k = \min_{(S_1, S_2, \ldots, S_k) \in P_k} \max_{i \in [k]} c(S_i).$$

Spectral graph theory provides a means for bounding the Cheeger constant using the (normalized) Laplacian matrix of the graph, defined as $L = D - A$ and $\mathcal{L} = I - D^{-1/2} A D^{-1/2}$, respectively. Here, $A$ stands for the adjacency matrix of the graph, $D$ denotes the diagonal degree matrix, while $I$ stands for the identity matrix. The graph Laplacian is an operator $\triangle_2^{(g)}$ (Chung, 1997) that satisfies
$$\langle x, \triangle_2^{(g)}(x) \rangle = \sum_{(uv) \in E} w_{uv} (x_u - x_v)^2.$$
A generalization of this operator, termed the p-Laplacian operator of a graph, $\triangle_p^{(g)}$, was introduced by Amghibech in (Amghibech, 2003).
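To make these definitions concrete, the following self-contained check (a toy graph of our own, not an example from the paper) brute-forces the Cheeger constant $h_2 = \min_S c(S)$ over all bipartitions and verifies the quadratic-form identity for $L = D - A$. The exponential enumeration is consistent with the hardness caveats above and is meant only for tiny graphs:

```python
import itertools
import numpy as np

# Toy weighted graph given by its symmetric adjacency matrix A;
# as in the text, mu_u = sum_v w_uv (the weighted degree).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
N = A.shape[0]
mu = A.sum(axis=1)

def conductance(S):
    """c(S) = vol(boundary of S) / min(vol(S), vol(complement of S))."""
    S = set(S)
    Sbar = set(range(N)) - S
    cut = sum(A[u, v] for u in S for v in Sbar)  # vol of the cut-set
    return cut / min(mu[list(S)].sum(), mu[list(Sbar)].sum())

# Cheeger constant h2 by exhaustive search over all bipartitions.
h2 = min(conductance(S)
         for r in range(1, N)
         for S in itertools.combinations(range(N), r))
print("h2 =", h2)

# Sanity check: <x, L x> equals the sum over edges of w_uv (x_u - x_v)^2.
L = np.diag(mu) - A
x = np.array([1.0, -2.0, 0.5, 3.0])
lhs = x @ L @ x
rhs = sum(A[u, v] * (x[u] - x[v]) ** 2
          for u in range(N) for v in range(u + 1, N))
print(np.isclose(lhs, rhs))  # True
```

Spectral methods replace the exhaustive search with an eigenvector computation on $L$ (or its normalized variant), at the cost of only approximating $h_2$.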