Multi-View Clustering Via Canonical Correlation Analysis

Keywords: multi-view learning, clustering, canonical correlation analysis

Abstract

Clustering data in high dimensions is believed to be a hard problem in general. A number of efficient clustering algorithms developed in recent years address this problem by projecting the data into a lower-dimensional subspace, e.g. via Principal Components Analysis (PCA) or random projections, before clustering. Here, we consider constructing such projections using multiple views of the data, via Canonical Correlation Analysis (CCA). Under the assumption that the views are uncorrelated conditioned on the cluster label, we show that the separation conditions required for the algorithm to be successful are rather mild (significantly weaker than prior results in the literature). We provide results for mixtures of Gaussians and mixtures of log-concave distributions. We also provide empirical support from audio-visual speaker clustering (where we desire the clusters to correspond to speaker ID) and from hierarchical Wikipedia document clustering (where one view is the words in the document and the other is the link structure).

Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute.

1. Introduction

The multi-view approach to learning is one in which we have 'views' of the data (sometimes in a rather abstract sense) and the goal is to use the relationship between these views to alleviate the difficulty of a learning problem of interest [BM98, KF07, AZ07]. In this work, we explore how having 'two views' of the data makes the clustering problem significantly more tractable.

Much recent work has been done on understanding under what conditions we can learn a mixture model. The basic problem is as follows: we are given independent samples from a mixture of k distributions, and our task is to either: 1) infer properties of the underlying mixture model (e.g. the mixing weights, means, etc.) or 2) classify a random sample according to which distribution it was generated from.

Under no restrictions on the underlying distribution, this problem is considered to be hard. However, in many applications, we are only interested in clustering the data when the component distributions are "well separated". In fact, the focus of recent clustering algorithms [Das99, VW02, AM05, BV08] is on efficiently learning with as little separation as possible. Typically, these separation conditions are such that, given a random sample from the mixture model, the Bayes optimal classifier is able to reliably (with high probability) recover which cluster generated that point.

This work assumes a rather natural multi-view assumption: that the views are (conditionally) uncorrelated, if we condition on which mixture distribution generated the views. There are many natural applications for which this assumption applies. For example, we can consider multi-modal views, with one view being a video stream and the other an audio stream of a speaker; here, conditioned on the speaker identity and maybe the phoneme (both of which could label the generating cluster), the views may be uncorrelated. A second example is the words and link structure in a document from a corpus such as Wikipedia; here, conditioned on the category of each document, the words in it and its link structure may be uncorrelated. In this paper, we provide experiments for both settings.

Under this multi-view assumption, we provide a simple and efficient subspace learning method, based on Canonical Correlation Analysis (CCA). This algorithm is affine invariant and is able to learn with some of the weakest separation conditions to date. The intuitive reason for this is that under our multi-view assumption, we are able to (approximately) find the low-dimensional subspace spanned by the means of the component distributions. This subspace is important because, when projected onto it, the means of the distributions are well separated, yet the typical distance between points from the same distribution is smaller than in the original space. The number of samples we require to cluster correctly scales as O(d), where d is the ambient dimension. Finally, we show through experiments that CCA-based algorithms consistently provide better performance than PCA-based clustering methods when applied to datasets in two different domains: audio-visual speaker clustering, and hierarchical Wikipedia document clustering by category.

Our work shows how the multi-view framework can provide substantial improvements to the clustering problem, adding to the growing body of results which show how the multi-view framework can alleviate the difficulty of learning problems.

Related Work. Most provably efficient clustering algorithms first project the data down to some low-dimensional space and then cluster the data in this lower-dimensional space (typically, an algorithm such as single linkage suffices here). Typically, these algorithms also work under a separation requirement, which is measured by the minimum distance between the means of any two mixture components.

One of the first provably efficient algorithms for learning mixture models is due to [Das99], who learns a mixture of spherical Gaussians by randomly projecting the mixture onto a low-dimensional subspace. [VW02] provide an algorithm with an improved separation requirement that learns a mixture of k spherical Gaussians, by projecting the mixture down to the k-dimensional subspace of highest variance. [KSV05, AM05] extend this result to mixtures of general Gaussians; however, they require a separation proportional to the maximum directional standard deviation of any mixture component.

[CR08] use a canonical-correlations-based algorithm to learn mixtures of axis-aligned Gaussians with a separation proportional to σ*, the maximum directional standard deviation in the subspace containing the centers of the distributions. Their algorithm requires the coordinate-independence property and an additional "spreading" condition. None of these algorithms are affine invariant.

Finally, [BV08] provides an affine-invariant algorithm for learning mixtures of general Gaussians, so long as the mixture has a suitably low Fisher coefficient when in isotropic position. However, their separation involves a rather large polynomial dependence on 1/w_min.

The two results most closely related to ours are the work of [VW02] and [CR08]. [VW02] shows that it is sufficient to find the subspace spanned by the means of the distributions in the mixture for effective clustering. Like our algorithm, [CR08] use a projection onto the top k − 1 singular value decomposition subspace of the canonical correlations matrix. They also require a spreading condition, which is related to our requirement on the rank. We borrow techniques from both these papers.

[?] propose a similar algorithm for multi-view clustering, in which data is projected onto the top few directions obtained by a kernel CCA of the multi-view data. They show empirically that, for clustering images using associated text (where the two views are an image and the text associated with it, and the target clustering is a human-defined category), CCA-based methods outperform PCA-based clustering algorithms.

In this paper, we study the problem of multi-view clustering. We have data on a fixed set of objects from two sources, which we call the two views, and our goal is to use this fact to cluster more effectively than with data from a single source. Following prior theoretical work, our goal is to show that our algorithm recovers the correct clustering when the input obeys certain conditions.

This Work. Our input is data on a fixed set of objects from two views, where View j is generated by a mixture of k Gaussians (D_1^j, ..., D_k^j), for j = 1, 2. To generate a sample, a source i is picked with probability w_i, and x^(1) and x^(2) in Views 1 and 2 are drawn from distributions D_i^1 and D_i^2.

We impose two requirements on these mixtures. First, we require that, conditioned on the source distribution in the mixture, the two views are uncorrelated. Notice that this is a weaker restriction than the condition that, given source i, the samples from D_i^1 and D_i^2 are drawn independently. Moreover, this condition allows the distributions in the mixture within each view to be completely general, so long as they are uncorrelated across views. Although we do not show it theoretically, we suspect that our algorithm is robust to small deviations from this assumption.

Second, we require the rank of the CCA matrix across the views to be at least k − 1, when each view is in isotropic position, and the (k − 1)-th singular value of this matrix to be at least λ_min. This condition ensures that there is sufficient correlation between the views. If these two conditions hold, then we can recover the subspace containing the means of the distributions in both views.

In addition, for mixtures of Gaussians, we require that in at least one view, say View 1, for every pair of distributions i and j in the mixture,

||μ_i^1 − μ_j^1|| > C σ* k^{1/4} √(log(n/δ))

for some constant C, where μ_i^1 is the mean of the i-th component distribution in view one and σ* is the maximum directional standard deviation in the subspace containing the centers of the distributions.

2. The Setting

We assume that our data is generated by a mixture of k distributions. In particular, we assume we obtain samples x = (x^(1), x^(2)), where x^(1) and x^(2) are the two views of the data, which live in the vector spaces V_1 of dimension d_1 and V_2 of dimension d_2, respectively. We let d = d_1 + d_2. Let μ_i^j, for i = 1, ..., k and j = 1, 2, be the center of distribution i in view j, and let w_i be the mixing weight for distribution i.
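The projection step described above (put each view in isotropic position, take the top k − 1 left singular directions of the cross-covariance between the whitened views, and cluster in the projected space) can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' exact procedure: the eigendecomposition-based whitening, the median-split stand-in for the clustering step, and the synthetic well-separated data are our own simplifications for the k = 2 case.

```python
import numpy as np

def whiten(X, eps=1e-8):
    # Put a sample in approximate isotropic position:
    # zero mean and (nearly) identity covariance.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return Xc @ W

def cca_subspace(X1, X2, k):
    # The cross-covariance of the whitened views is the CCA matrix;
    # its top k - 1 left singular vectors span the projection
    # subspace for view 1.
    Z1, Z2 = whiten(X1), whiten(X2)
    C12 = Z1.T @ Z2 / len(Z1)
    U, svals, _ = np.linalg.svd(C12)
    return Z1 @ U[:, :k - 1], svals

# Synthetic two-view mixture of k = 2 Gaussians whose views are
# independent (hence uncorrelated) given the cluster label.
rng = np.random.default_rng(0)
n, d1, d2, k = 2000, 20, 20, 2
labels = rng.integers(0, k, size=n)
mu1 = 3.0 * rng.normal(size=(k, d1))   # view-1 centers
mu2 = 3.0 * rng.normal(size=(k, d2))   # view-2 centers
X1 = mu1[labels] + rng.normal(size=(n, d1))
X2 = mu2[labels] + rng.normal(size=(n, d2))

proj, svals = cca_subspace(X1, X2, k)
# With k = 2 the projected data is one-dimensional; a median split
# stands in for the clustering step (the paper uses a proper
# clustering algorithm in the projected space).
pred = (proj[:, 0] > np.median(proj[:, 0])).astype(int)
accuracy = max(np.mean(pred == labels), np.mean(pred != labels))
```

Because each view is whitened before the SVD, the output is unchanged under invertible affine transformations of either view, which is the affine invariance the text emphasizes; PCA on a single view lacks this property.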
