Robust and Efficient Computation of Eigenvectors in a Generalized Spectral Method for Constrained Clustering

Total Pages: 16

File Type: pdf, Size: 1020 KB

Robust and Efficient Computation of Eigenvectors in a Generalized Spectral Method for Constrained Clustering

Chengming Jiang (University of California, Davis, [email protected]), Huiqing Xie (East China University of Science and Technology, [email protected]), Zhaojun Bai (University of California, Davis, [email protected])

Abstract

FAST-GE is a generalized spectral method for constrained clustering [Cucuringu et al., AISTATS 2016]. It incorporates the must-link and cannot-link constraints into two Laplacian matrices and then minimizes a Rayleigh quotient via solving a generalized eigenproblem, and is considered to be simple and scalable. However, there are two unsolved issues. Theoretically, since both Laplacian matrices are positive semi-definite and the corresponding pencil is singular, it is not proven whether the minimum of the Rayleigh quotient exists and is equivalent to an eigenproblem. Computationally, the locally optimal block preconditioned conjugate gradient (LOBPCG) method is not designed for solving the eigenproblem of a singular pencil. In fact, to the best of our knowledge, there is no existing eigensolver that is immediately applicable. In this paper, we provide solutions to these two critical issues. We prove a generalization of the Courant-Fischer variational principle for the Laplacian singular pencil. We propose a regularization for the pencil so that LOBPCG is applicable. We demonstrate the robustness and efficiency of the proposed solutions for constrained image segmentation. The proposed theoretical and computational solutions can be applied to eigenproblems of positive semi-definite pencils arising in other machine learning algorithms, such as generalized linear discriminant analysis in dimension reduction and multisurface classification via eigenvectors.

Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, Fort Lauderdale, Florida, USA. JMLR: W&CP volume 54. Copyright 2017 by the author(s).

1 INTRODUCTION

Clustering is one of the most important techniques for statistical data analysis, with applications ranging from machine learning, pattern recognition, image analysis, and bioinformatics to computer graphics. It attempts to categorize or group data into clusters on the basis of their similarity. Normalized Cut [Shi and Malik, 2000] and Spectral Clustering [Ng et al., 2002] are two popular algorithms.

Constrained clustering refers to clustering with prior domain knowledge of grouping information. Here relatively few must-link (ML) or cannot-link (CL) constraints are available to specify regions that must be grouped in the same partition or be separated into different ones [Wagstaff et al., 2001]. With constraints, the quality of clustering can be improved dramatically. In the past few years, constrained clustering has attracted a lot of attention in many applications such as transductive learning [Chapelle et al., 2006, Joachims, 2003], community detection [Eaton and Mansbach, 2012, Ma et al., 2010] and image segmentation [Chew and Cahill, 2015, Cour et al., 2007, Cucuringu et al., 2016, Eriksson et al., 2011, Wang et al., 2014, Xu et al., 2009, Yu and Shi, 2004].

Existing constrained clustering methods based on spectral graph theory can be organized into two classes. One class imposes the constraints on the indicator vectors (eigenspace) explicitly; namely, the constraints are encoded either in a linear form [Cour et al., 2007, Eriksson et al., 2011, Xu et al., 2009, Yu and Shi, 2004] or in a bilinear form [Wang et al., 2014]. The other class of methods incorporates the constraints implicitly into the Laplacians. The semi-supervised normalized cut [Chew and Cahill, 2015] casts the constraints as a low-rank matrix added to the Laplacian of the data graph. In a recently proposed generalized spectral method (FAST-GE) [Cucuringu et al., 2016], the ML and CL constraints are incorporated by two Laplacians L_G and L_H. The clustering is realized by solving the optimization problem

    inf_{x in R^n, x^T L_H x > 0}  (x^T L_G x) / (x^T L_H x),        (1)

which subsequently is converted to the problem of computing a few eigenvectors of the generalized eigenproblem

    L_G x = λ L_H x.        (2)

It is shown that the FAST-GE algorithm works in nearly-linear time and provides some theoretical guarantees for the quality of the clusters [Cucuringu et al., 2016]. FAST-GE demonstrates quality superior to the other spectral approaches, namely CSP [Wang et al., 2014] and COSf [Rangapuram and Hein, 2012].

However, there are two critical unsolved issues associated with the FAST-GE algorithm. The first one is theoretical. Since both Laplacians L_G and L_H are symmetric positive semi-definite and the pencil L_G − λL_H is singular, i.e., det(L_G − λL_H) ≡ 0 for all λ, the Courant-Fischer variational principle [Golub and Van Loan, 2012, Sec. 8.1.1] is not applicable. Consequently, it is not proven whether the infimum of the Rayleigh quotient (1) is equivalent to the smallest eigenvalue of problem (2). The second issue is computational. LOBPCG is not designed for the generalized eigenproblem of a singular pencil. In fact, to the best of our knowledge, there is no existing eigensolver that is applicable to the large sparse eigenproblem (2).

In this paper, we address these two critical issues. Theoretically, we first derive a canonical form of the singular pencil L_G − λL_H and show the existence of the finite real eigenvalues. Then we generalize the Courant-Fischer variational principle to the singular pencil L_G − λL_H and prove that the infimum of the Rayleigh quotient (1) can be replaced by the minimum; namely, the existence of minima is guaranteed. Based on these theoretical results, we can claim that the optimization problem (1) is indeed equivalent to the problem of finding the smallest finite eigenvalue and associated eigenvectors of the generalized eigenproblem (2). Computationally, we propose a regularization to transform the singular pencil L_G − λL_H to a positive definite pencil K − σM, where K and M are symmetric and M is positive definite. Consequently, we can directly apply LOBPCG and other eigensolvers, such as the Lanczos method in ARPACK [Lehoucq et al., 1998]. We demonstrate the robustness and efficiency of the proposed approach for constrained segmentation of a set of large images.

We note that the essence of the proposed theoretical and computational solutions in this paper is a mathematically rigorous and computationally effective treatment of large sparse symmetric positive semi-definite pencils. The proposed results could be applied to eigenproblems arising in other machine learning techniques, such as generalized linear discriminant analysis in dimension reduction [He et al., 2005, Park and Park, 2008, Zhu and Huang, 2014] and multisurface proximal support vector machine classification via generalized eigenvectors [Mangasarian and Wild, 2006].

2 PRELIMINARIES

A weighted undirected graph G is represented by a pair (V, W), where V = {v_1, v_2, ..., v_n} denotes the set of vertices, and W = (w_ij) is a symmetric weight matrix such that w_ij >= 0 and w_ii = 0 for all 1 <= i, j <= n. The pair (v_i, v_j) is an edge of G iff w_ij > 0. The degree d_i of a vertex v_i is the sum of the weights of the edges adjacent to v_i:

    d_i = sum_{j=1}^n w_ij.

The degree matrix is D = diag(d_1, d_2, ..., d_n). The Laplacian L of G is defined by L = D − W and has the following well-known properties:
(a) x^T L x = (1/2) sum_{i,j} w_ij (x_i − x_j)^2;
(b) L is positive semi-definite if w_ij >= 0 for all i, j;
(c) L · 1 = 0, where 1 is the vector of all ones;
(d) let λ_i denote the ith smallest eigenvalue of L; if the underlying graph of G is connected, then 0 = λ_1 < λ_2 <= λ_3 <= ... <= λ_n, and dim(N(L)) = 1, where N(L) denotes the nullspace of L.

Let A be a subset of the vertices V and A-bar = V \ A. The quantity

    cut_G(A) = sum_{v_i in A, v_j in A-bar} w_ij        (3)

is called the cut of A on graph G. The volume of A is the sum of the weights of all edges adjacent to vertices in A:

    vol(A) = sum_{v_i in A} sum_{j=1}^n w_ij.

It can be shown, see for example [Gallier, 2013], that

    cut_G(A) = (x^T L x) / (a − b)^2,        (4)

where x is the indicator vector whose ith element is x_i = a if v_i in A and x_i = b if v_i not in A, and a, b are two distinct real numbers.

For a given weighted graph G = (V, W), k-way partitioning is to find a partition (A_1, ..., A_k) of V such that the edges between different subsets have very low weight and the edges within a subset have very high weight. The Normalized Cut (Ncut) method [Shi and Malik, 2000] is a popular method for unconstrained partitioning. For the given weighted graph G, the objective of Ncut for a 2-way partition (A, A-bar) is to minimize the quantity

    Ncut(A) = cut(A)/vol(A) + cut(A-bar)/vol(A-bar).        (5)

It is shown that

    min_A Ncut(A) = min_x (x^T L x) / (x^T D x),        (6)

[...] different V_i form the CL constraints, the objective of FAST-GE is to find a partition (A_1, A_2, ..., A_k) of V such that V_i is a subset of A_i, the edges between different subsets have very low weight, and the edges within a subset have very high weight.

Let cut_{G_D}(A_p) be the cut of the subset A_p on G_D defined in (3), and introduce a graph

    G_M = (V, W_M),        (9)

where the weight matrix is W_M = sum_{l=1}^k W_{M_l}. The entries of W_M are defined as follows: if v_i and v_j are in the same V_l, then W_{M_l}(i, j) = d_i d_j / (d_min d_max), where d_i and d_j are the degrees of v_i and v_j in G_D, d_min = min_i d_i and d_max = max_i d_i.
Recommended publications
  • Segment-Based Clustering of Hyperspectral Images Using Tree-Based Data Partitioning Structures
algorithms Article. Segment-Based Clustering of Hyperspectral Images Using Tree-Based Data Partitioning Structures. Mohamed Ismail and Milica Orlandi´c*. Department of Electronic Systems, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway; [email protected]. * Correspondence: [email protected]. Received: 2 October 2020; Accepted: 8 December 2020; Published: 10 December 2020. Abstract: Hyperspectral image classification has been increasingly used in the field of remote sensing. In this study, a new clustering framework for large-scale hyperspectral image (HSI) classification is proposed. The proposed four-step classification scheme explores how to effectively use the global spectral information and local spatial structure of hyperspectral data for HSI classification. Initially, multidimensional Watershed is used for pre-segmentation. Region-based hierarchical hyperspectral image segmentation is based on the construction of Binary partition trees (BPT). Each segmented region is modeled using first-order parametric modelling, which is then followed by a region merging stage using HSI regional spectral properties in order to obtain a BPT representation. The tree is then pruned to obtain a more compact representation. In addition, principal component analysis (PCA) is utilized for HSI feature extraction, so that the extracted features are further incorporated into the BPT. Finally, an efficient variant of the k-means clustering algorithm, called the filtering algorithm, is deployed on the created BPT structure, producing the final cluster map. The proposed method is tested over eight publicly available hyperspectral scenes with ground truth data and is further compared with other clustering frameworks. The extensive experimental analysis demonstrates the efficacy of the proposed method.
    [Show full text]
  • Accelerating the LOBPCG Method on GPUs Using a Blocked Sparse Matrix Vector Product
Accelerating the LOBPCG method on GPUs using a blocked Sparse Matrix Vector Product. Hartwig Anzt, Stanimire Tomov and Jack Dongarra. Innovative Computing Lab, University of Tennessee, Knoxville, USA. Email: [email protected], [email protected], [email protected]. Abstract—This paper presents a heterogeneous CPU-GPU algorithm design and optimized implementation for an entire sparse iterative eigensolver – the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) – starting from low-level GPU data structures and kernels to the higher-level algorithmic choices and overall heterogeneous design. Most notably, the eigensolver leverages the high performance of a new GPU kernel developed for the simultaneous multiplication of a sparse matrix and a set of vectors (SpMM). This is a building block that serves as a backbone for not only block-Krylov, but also for other methods relying on blocking for acceleration in general. The heterogeneous LOBPCG developed here reveals the potential of this type of eigensolver by highly optimizing all of its components, and can be viewed as a benchmark for other SpMM-dependent applications. [...] the computing power of today's supercomputers, often accelerated by coprocessors like graphics processing units (GPUs), becomes challenging. While there exist numerous efforts to adapt iterative linear solvers like Krylov subspace methods to coprocessor technology, sparse eigensolvers have so far remained outside the main focus. A possible explanation is that many of those combine sparse and dense linear algebra routines, which makes porting them to accelerators more difficult. Aside from the power method, algorithms based on the Krylov subspace idea are among the most commonly used general eigensolvers [1]. When targeting symmetric positive definite eigenvalue problems, the recently developed Locally Optimal [...]
    [Show full text]
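The SpMM idea — multiplying a sparse matrix by a whole block of vectors at once instead of one vector at a time — can be demonstrated in a few lines of SciPy. This only illustrates the algebraic equivalence of the two products (the paper's point is the memory traffic: the blocked form reads the matrix entries once for all k vectors), not the GPU kernel itself; sizes and density below are arbitrary.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(1)
n, k = 2000, 8

# Random sparse symmetric matrix and a block of k vectors.
A = sp.random(n, n, density=1e-3, random_state=1, format="csr")
A = (A + A.T) * 0.5
X = rng.standard_normal((n, k))

# One blocked product (SpMM) versus k separate SpMVs: identical result,
# but the blocked form traverses A's nonzeros once for all k vectors,
# which is what pays off on bandwidth-bound hardware like GPUs.
Y_block = A @ X
Y_loop = np.column_stack([A @ X[:, j] for j in range(k)])
```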
  • Image Segmentation Based on Multiscale Fast Spectral Clustering Chongyang Zhang, Guofeng Zhu, Minxin Chen, Hong Chen, Chenjian Wu
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. XX, NO. X, XX 2018. Image Segmentation Based on Multiscale Fast Spectral Clustering. Chongyang Zhang, Guofeng Zhu, Minxin Chen, Hong Chen, Chenjian Wu. Abstract—In recent years, spectral clustering has become one of the most popular clustering algorithms for image segmentation. However, it has restricted applicability to large-scale images due to its high computational complexity. In this paper, we first propose a novel algorithm called Fast Spectral Clustering based on quad-tree decomposition. The algorithm focuses on spectral clustering at the superpixel level and its computational complexity is O(n log n) + O(m) + O(m^{3/2}); its memory cost is O(m), where n and m are the numbers of pixels and superpixels of an image. Then we propose Multiscale Fast Spectral Clustering by improving Fast Spectral Clustering, which is based on the hierarchical structure of the quad-tree. The computational complexity of Multiscale Fast Spectral Clustering is O(n log n). In recent years, researchers have proposed various approaches to large-scale image segmentation. The approaches are based on the three following strategies: constructing a sparse similarity matrix, using the Nyström approximation, and using representative points. The following approaches are based on constructing a sparse similarity matrix. In 2000, Shi and Malik [18] constructed the similarity matrix of the image by using the k-nearest-neighbor sparse strategy to reduce the complexity of constructing the similarity matrix to O(n) and to reduce the complexity of eigen-decomposing the Laplacian matrix to O(n^{3/2}) by using the Lanczos algorithm.
    [Show full text]
  • Learning on Hypergraphs: Spectral Theory and Clustering
Learning on Hypergraphs: Spectral Theory and Clustering. Pan Li, Olgica Milenkovic. Coordinated Science Laboratory, University of Illinois at Urbana-Champaign. March 12, 2019. Learning on Graphs: Graphs are indispensable mathematical data models capturing pairwise interactions: k-nn network, social network, publication network. Important learning-on-graphs problems: clustering (community detection), semi-supervised/active learning, representation learning (graph embedding), etc. Beyond Pairwise Relations: A graph models pairwise relations. Recent work has shown that high-order relations can be significantly more informative. Examples include: understanding the organization of networks (Benson, Gleich and Leskovec '16); determining the topological connectivity between data points (Zhou, Huang, Schölkopf '07). Graphs with high-order relations can be modeled as hypergraphs (formally defined later). Meta-graphs, meta-paths in heterogeneous information networks. Algorithmic methods for analyzing high-order relations and learning problems are still under development. Beyond Pairwise Relations: Functional units in social and biological networks. High-order network motifs: Motif (Benson '16); microfauna; pelagic fishes; crabs & benthic fishes; macroinvertebrates. Algorithmic methods for analyzing high-order relations and learning problems are still under development. Beyond Pairwise Relations: Functional units in social and biological networks. Meta-graphs, meta-paths in heterogeneous information networks (Zhou, Yu, Han '11). Beyond Pairwise Relations: Functional units in social and biological networks. Meta-graphs, meta-paths in heterogeneous information networks. Algorithmic methods for analyzing high-order relations and learning problems are still under development. Review of Graph Clustering: Notation and Terminology. Graph Clustering Task: cluster the vertices that are "densely" connected by edges.
Graph Partitioning and Conductance. A (weighted) graph G = (V, E, w): for e in E, w_e is the weight.
    [Show full text]
  • Fast Approximate Spectral Clustering
Fast Approximate Spectral Clustering. Donghui Yan (University of California, Berkeley, CA 94720, [email protected]), Ling Huang (Intel Research Lab, Berkeley, CA 94704, [email protected]), Michael I. Jordan (University of California, Berkeley, CA 94720, [email protected]). ABSTRACT: Spectral clustering refers to a flexible class of clustering procedures that can produce high-quality clusterings on small data sets but which has limited applicability to large-scale problems due to its computational complexity of O(n^3) in general, with n the number of data points. We extend the range of spectral clustering by developing a general framework for fast approximate spectral clustering in which a distortion-minimizing local transformation is first applied to the data. This framework is based on a theoretical analysis that provides a statistical characterization of the effect of local distortion on the mis-clustering rate. We develop two concrete instances of our general framework, one based on local k-means clustering [...] A variety of methods have been developed over the past several decades to solve clustering problems [15, 19]. A relatively recent area of focus has been spectral clustering, a class of methods based on eigendecompositions of affinity, dissimilarity or kernel matrices [20, 28, 31]. Whereas many clustering methods are strongly tied to Euclidean geometry, making explicit or implicit assumptions that clusters form convex regions in Euclidean space, spectral methods are more flexible, capturing a wider range of geometries. They often yield superior empirical performance when compared to competing algorithms such as k-means, and they have been successfully deployed in numerous applications in areas such as computer vision, bioinformatics, and robotics.
    [Show full text]
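The recipe sketched in the abstract — compress n points to m representatives with k-means, run spectral clustering only on the representatives, then let each point inherit its representative's label — can be roughed out as follows. This is a simplified illustration, not the authors' code: the representative count m, the RBF bandwidth choice, and the toy two-blob data are all assumptions made here.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.linalg import eigh
from scipy.spatial.distance import pdist, squareform

np.random.seed(2)                       # kmeans2 draws from the global RNG
rng = np.random.default_rng(2)

# Toy data: two well-separated blobs; ground truth is the blob index.
n_per = 500
data = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(n_per, 2)),
    rng.normal([10.0, 0.0], 0.5, size=(n_per, 2)),
])
truth = np.repeat([0, 1], n_per)

# Step 1 (the "distortion-minimizing local transformation"): replace the
# n points by m << n k-means representatives.
m = 40
centers, assign = kmeans2(data, m, minit="++")

# Step 2: ordinary spectral clustering, but only on the m representatives.
dist = squareform(pdist(centers))
sigma = np.median(dist)                 # assumed bandwidth heuristic
S = np.exp(-dist**2 / (2.0 * sigma**2))
d = S.sum(1)
L_sym = np.eye(m) - S / np.sqrt(np.outer(d, d))   # normalized Laplacian
_, U = eigh(L_sym)
emb = U[:, :2]                          # bottom two eigenvectors
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
_, center_labels = kmeans2(emb, 2, minit="++")

# Step 3: each point inherits the label of its k-means representative.
labels = center_labels[assign]
```

The expensive eigendecomposition is performed on an m × m matrix instead of n × n, which is the source of the reported speedups.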
  • An Algorithmic Introduction to Clustering
AN ALGORITHMIC INTRODUCTION TO CLUSTERING. A PREPRINT. Bernardo Gonzalez, Department of Computer Science and Engineering, UCSC, [email protected]. June 11, 2020. The purpose of this document is to provide an easy introductory guide to clustering algorithms. Basic knowledge of and exposure to probability (random variables, conditional probability, Bayes' theorem, independence, Gaussian distribution), matrix calculus (matrix and vector derivatives), linear algebra and analysis of algorithms (specifically, time complexity analysis) is assumed. Starting with Gaussian Mixture Models (GMM), this guide will visit different algorithms like the well-known k-means, DBSCAN and Spectral Clustering (SC) algorithms, in a connected and hopefully understandable way for the reader. The first three sections (Introduction, GMM and k-means) are based on [1]. The fourth section (SC) is based on [2] and [5]. The fifth section (DBSCAN) is based on [6]. The sixth section (Mean Shift) is, to the best of the author's knowledge, original work. The seventh section is dedicated to conclusions and future work. Traditionally, these five algorithms are considered completely unrelated, and they are considered members of different families of clustering algorithms:
• Model-based algorithms: the data are viewed as coming from a mixture of probability distributions, each of which represents a different cluster. A Gaussian Mixture Model is considered a member of this family.
• Centroid-based algorithms: any data point in a cluster is represented by the central vector of that cluster, which need not be part of the dataset. k-means is considered a member of this family.
• Graph-based algorithms: the data is represented using a graph, and the clustering procedure leverages graph theory tools to create a partition of this graph.
    [Show full text]
  • Spectral Clustering: a Tutorial for the 2010’S
Spectral Clustering: a Tutorial for the 2010's. Marina Meila*, Department of Statistics, University of Washington. September 22, 2016. Abstract: Spectral clustering is a family of methods to find K clusters using the eigenvectors of a matrix. Typically, this matrix is derived from a set of pairwise similarities S_ij between the points to be clustered. This task is called similarity based clustering, graph clustering, or clustering of dyadic data. One remarkable advantage of spectral clustering is its ability to cluster "points" which are not necessarily vectors, and to use for this a "similarity", which is less restrictive than a distance. A second advantage of spectral clustering is its flexibility; it can find clusters of arbitrary shapes, under realistic separations. This chapter introduces the similarity based clustering paradigm, describes the algorithms used, and sets the foundations for understanding these algorithms. Practical aspects, such as obtaining the similarities, are also discussed. * This tutorial appeared in "Handbook of Cluster Analysis" by Christian Hennig, Marina Meila, Fionn Murtagh and Roberto Rocci (eds.), CRC Press, 2015.
Contents
1 Similarity based clustering. Definitions and criteria
  1.1 What is similarity based clustering?
  1.2 Similarity based clustering and cuts in graphs
  1.3 The Laplacian and other matrices of spectral clustering
  1.4 Four bird's eye views of spectral clustering
2 Spectral clustering algorithms
3 Understanding the spectral clustering algorithms
  3.1 Random walk/Markov chain view
  3.2 Spectral clustering as finding a small balanced cut in G
  3.3 Spectral clustering as finding smooth embeddings
    [Show full text]
  • Spatially-Encouraged Spectral Clustering: a Technique for Blending Map Typologies and Regionalization
Spatially-encouraged spectral clustering: a technique for blending map typologies and regionalization. Levi John Wolf, School of Geographical Sciences, University of Bristol, United Kingdom. ARTICLE HISTORY: Compiled May 19, 2021. ABSTRACT: Clustering is a central concern in geographic data science and reflects a large, active domain of research. In spatial clustering, it is often challenging to balance two kinds of "goodness of fit": clusters should have "feature" homogeneity, in that they aim to represent one "type" of observation, and also "geographic" coherence, in that they aim to represent some detected geographical "place." This divides "map typologization" studies, common in geodemographics, from "regionalization" studies, common in spatial optimization and statistics. Recent attempts to simultaneously typologize and regionalize data into clusters with both feature homogeneity and geographic coherence have faced conceptual and computational challenges. Fortunately, new work on spectral clustering can address both regionalization and typologization tasks within the same framework. This research develops a novel kernel combination method for use within spectral clustering that allows analysts to blend smoothly between feature homogeneity and geographic coherence. I explore the formal properties of two kernel combination methods and recommend multiplicative kernel combination with spectral clustering. Altogether, spatially-encouraged spectral clustering is shown as a novel kernel combination clustering method that can address both regionalization and typologization tasks in order to reveal the geographies latent in spatially-structured data. KEYWORDS: clustering; geodemographics; spatial analysis; spectral clustering. 1. Introduction. Clustering is a critical concern in geography. In geographic data science, unsupervised learning has emerged as a remarkably effective collection of approaches for describing and exploring the latent structure in data (Hastie et al.
    [Show full text]
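The multiplicative kernel combination the abstract recommends can be sketched in a few lines: an RBF kernel on the features is multiplied elementwise (Hadamard product) by an RBF kernel on the coordinates, and the combined kernel drives an ordinary spectral bipartition, so a cluster must be both feature-homogeneous and geographically compact. This is a toy one-dimensional illustration with bandwidths chosen arbitrarily here, not the paper's implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.linalg import eigh

rng = np.random.default_rng(3)

# Toy data: 60 sites along a line; the feature value flips in the middle.
coords = np.linspace(0.0, 1.0, 60)[:, None]
feats = np.where(coords[:, 0] < 0.5, 0.0, 1.0)[:, None]
feats = feats + rng.normal(0.0, 0.05, feats.shape)

def rbf(X, sigma):
    D = squareform(pdist(X))
    return np.exp(-D**2 / (2.0 * sigma**2))

K_feat = rbf(feats, 0.5)    # "type" (feature) similarity
K_geo = rbf(coords, 0.2)    # geographic proximity
K = K_feat * K_geo          # multiplicative (Hadamard) combination

# Spectral bipartition from the combined kernel's normalized Laplacian.
d = K.sum(1)
L_sym = np.eye(len(K)) - K / np.sqrt(np.outer(d, d))
vals, vecs = eigh(L_sym)
fiedler = vecs[:, 1]        # second-smallest eigenvector
labels = (fiedler > 0).astype(int)
```

Because the geographic kernel decays with distance, the resulting partition follows the feature break while staying spatially contiguous.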
  • Semi-Supervised Spectral Clustering for Image Set Classification
Semi-supervised Spectral Clustering for Image Set Classification. Arif Mahmood, Ajmal Mian, Robyn Owens. School of Computer Science and Software Engineering, The University of Western Australia. {arif.mahmood, ajmal.mian, robyn.owens}@uwa.edu.au. Abstract: We present an image set classification algorithm based on unsupervised clustering of labeled training and unlabeled test data where labels are only used in the stopping criterion. The probability distribution of each class over the set of clusters is used to define a true set based similarity measure. To this end, we propose an iterative sparse spectral clustering algorithm. In each iteration, a proximity matrix is efficiently recomputed to better represent the local subspace structure. Initial clusters capture the global data structure and finer clusters at the later stages capture the subtle class differences not visible at the global scale. Image sets are compactly represented with multiple Grassmannian manifolds which are subsequently embedded in Euclidean space with the proposed spectral clustering algorithm. We also propose an efficient eigenvector solver which not only reduces the computational cost of spectral clustering by many folds but also improves the clustering quality and final classification results. Experiments on five standard datasets and comparison with seven existing techniques [...] [Figure 1: Projection on the top 3 principal directions of 5 subjects (in different colors) from the CMU Mobo face dataset. The natural data clusters do not follow the class labels and the underlying face subspaces are neither independent nor disjoint.]
    [Show full text]
  • Solving Symmetric Semi-Definite (Ill-Conditioned) Generalized Eigenvalue Problems
Solving Symmetric Semi-definite (ill-conditioned) Generalized Eigenvalue Problems. Zhaojun Bai, University of California, Davis. Berkeley, August 19, 2016.
Symmetric definite generalized eigenvalue problem
• Symmetric definite generalized eigenvalue problem: Ax = λBx, where A^T = A and B^T = B > 0.
• Eigen-decomposition: AX = BXΛ, where Λ = diag(λ_1, λ_2, ..., λ_n), X = (x_1, x_2, ..., x_n), and X^T B X = I.
• Assume λ_1 ≤ λ_2 ≤ ... ≤ λ_n.
LAPACK solvers
• LAPACK routines xSYGV, xSYGVD, xSYGVX are based on the following algorithm (Wilkinson '65):
  1. compute the Cholesky factorization B = GG^T
  2. compute C = G^{-1} A G^{-T}
  3. compute the symmetric eigen-decomposition Q^T C Q = Λ
  4. set X = G^{-T} Q
• xSYGV[D,X] could be numerically unstable if B is ill-conditioned:
  |λ̂_i − λ_i| ≲ p(n) (‖B^{-1}‖_2 ‖A‖_2 + cond(B) |λ̂_i|) · ε
  and
  θ(x̂_i, x_i) ≲ p(n) · (‖B^{-1}‖_2 ‖A‖_2 cond(B)^{1/2} + cond(B) |λ̂_i|) / specgap_i · ε
• User's choice between the inversion of an ill-conditioned Cholesky factor and the QZ algorithm that destroys symmetry.
Algorithms to address the ill-conditioning
1. Fix-Heiberger '72 (Parlett '80): explicit reduction
2. Chang-Chung Chang '74: SQZ method (QZ by Moler and Stewart '73)
3. Bunse-Gerstner '84: MDR method
4. Chandrasekaran '00: "proper pivoting scheme"
5. Davies-Higham-Tisseur '01: Cholesky + Jacobi
6. Working notes by Kahan '11 and Moler '14
This talk
Three approaches:
1. A LAPACK-style implementation of the Fix-Heiberger algorithm
2. An algebraic reformulation
3. Locally accelerated block preconditioned steepest descent (LABPSD)
This talk
Three approaches:
1. A LAPACK-style implementation of the Fix-Heiberger algorithm. Status: beta-release
2.
    [Show full text]
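The four-step xSYGV reduction listed in the slides can be written out directly in NumPy/SciPy. This is a dense sketch for a deliberately well-conditioned B (the slides' point is precisely that this route degrades when B is ill-conditioned); the random test matrices are assumptions of this example.

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

rng = np.random.default_rng(4)
n = 6

# Random symmetric A and well-conditioned symmetric positive definite B.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)

# Wilkinson's reduction, the algorithm behind LAPACK xSYGV:
G = cholesky(B, lower=True)                        # 1. B = G G^T
Gi_A = solve_triangular(G, A, lower=True)          #    G^{-1} A
C = solve_triangular(G, Gi_A.T, lower=True).T      # 2. C = G^{-1} A G^{-T}
C = (C + C.T) / 2                                  #    symmetrize roundoff
lam, Q = eigh(C)                                   # 3. Q^T C Q = Lambda
X = solve_triangular(G, Q, lower=True, trans="T")  # 4. X = G^{-T} Q

# Reference: SciPy's generalized symmetric-definite solver.
lam_ref = eigh(A, B, eigvals_only=True)
```

For this well-conditioned B the reduction reproduces the generalized eigenvalues and B-orthonormal eigenvectors; the error bounds on the slide show how both eigenvalues and eigenvector angles deteriorate with cond(B).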
  • Fast Approximate Spectral Clustering
Fast Approximate Spectral Clustering. Donghui Yan (Department of Statistics, University of California, Berkeeley — Berkeley, CA 94720), Ling Huang (Intel Research Lab, Berkeley, CA 94704), Michael I. Jordan (Departments of EECS and Statistics, University of California, Berkeley, CA 94720). Technical Report. Abstract: Spectral clustering refers to a flexible class of clustering procedures that can produce high-quality clusterings on small data sets but which has limited applicability to large-scale problems due to its computational complexity of O(n^3) in general, with n the number of data points. We extend the range of spectral clustering by developing a general framework for fast approximate spectral clustering in which a distortion-minimizing local transformation is first applied to the data. This framework is based on a theoretical analysis that provides a statistical characterization of the effect of local distortion on the mis-clustering rate. We develop two concrete instances of our general framework, one based on local k-means clustering (KASP) and one based on random projection trees (RASP). Extensive experiments show that these algorithms can achieve significant speedups with little degradation in clustering accuracy. Specifically, our algorithms outperform k-means by a large margin in terms of accuracy, and run several times faster than approximate spectral clustering based on the Nyström method, with comparable accuracy and significantly smaller memory footprint. Remarkably, our algorithms make it possible for a single machine to spectral cluster data sets with a million observations within several minutes. 1 Introduction. Clustering is a problem of primary importance in data mining, statistical machine learning and scientific discovery. An enormous variety of methods have been developed over the past several decades to solve clustering problems [16, 21].
    [Show full text]
  • On the Performance and Energy Efficiency of Sparse Linear Algebra on GPUs
Original Article. On the performance and energy efficiency of sparse linear algebra on GPUs. Hartwig Anzt, Stanimire Tomov and Jack Dongarra. The International Journal of High Performance Computing Applications, 1–16. © The Author(s) 2016. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav. DOI: 10.1177/1094342016672081. hpc.sagepub.com. Abstract: In this paper we unveil some performance and energy efficiency frontiers for sparse computations on GPU-based supercomputers. We compare the resource efficiency of different sparse matrix–vector products (SpMV) taken from libraries such as cuSPARSE and MAGMA for GPU and Intel's MKL for multicore CPUs, and develop a GPU sparse matrix–matrix product (SpMM) implementation that handles the simultaneous multiplication of a sparse matrix with a set of vectors in block-wise fashion. While a typical sparse computation such as the SpMV reaches only a fraction of the peak of current GPUs, we show that the SpMM succeeds in exceeding the memory-bound limitations of the SpMV. We integrate this kernel into a GPU-accelerated Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) eigensolver. LOBPCG is chosen as a benchmark algorithm for this study as it combines an interesting mix of sparse and dense linear algebra operations that is typical for complex simulation applications, and allows for hardware-aware optimizations. In a detailed analysis we compare the performance and energy efficiency against a multi-threaded CPU counterpart. The reported performance and energy efficiency results are indicative of sparse computations on supercomputers. Keywords: sparse eigensolver, LOBPCG, GPU supercomputer, energy efficiency, blocked sparse matrix–vector product. 1 Introduction [...] GPU-based supercomputers.
    [Show full text]