
Multi-View Clustering via Canonical Correlation Analysis

Kamalika Chaudhuri [email protected]
UC San Diego, 9500 Gilman Drive, La Jolla, CA

Sham M. Kakade [email protected]
Karen Livescu [email protected]
Karthik Sridharan [email protected]
Toyota Technological Institute at Chicago, 6045 S. Kenwood Ave., Chicago, IL

Keywords: multi-view learning, clustering, canonical correlation analysis

Appearing in Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009. Copyright 2009 by the author(s)/owner(s).

Abstract

Clustering data in high dimensions is believed to be a hard problem in general. A number of efficient clustering algorithms developed in recent years address this problem by projecting the data into a lower-dimensional subspace, e.g. via Principal Components Analysis (PCA) or random projections, before clustering. Here, we consider constructing such projections using multiple views of the data, via Canonical Correlation Analysis (CCA).

Under the assumption that the views are uncorrelated given the cluster label, we show that the separation conditions required for the algorithm to be successful are significantly weaker than prior results in the literature. We provide results for mixtures of Gaussians and mixtures of log-concave distributions. We also provide empirical support from audio-visual speaker clustering (where we desire the clusters to correspond to speaker ID) and from hierarchical Wikipedia document clustering (where one view is the words in the document and the other is the link structure).

1. Introduction

The multi-view approach to learning is one in which we have 'views' of the data (sometimes in a rather abstract sense) and the goal is to use the relationship between these views to alleviate the difficulty of a learning problem of interest [BM98, KF07, AZ07]. In this work, we explore how having two 'views' makes the clustering problem significantly more tractable.

Much recent work has been done on understanding under what conditions we can learn a mixture model. The basic problem is as follows: we are given independent samples from a mixture of k distributions, and our task is to either 1) infer properties of the underlying mixture model (e.g. the mixing weights, means, etc.) or 2) classify a random sample according to which distribution in the mixture it was generated from.

Under no restrictions on the underlying mixture, this problem is considered to be hard. However, in many applications, we are only interested in clustering the data when the component distributions are "well separated". In fact, the focus of recent clustering algorithms [Das99, VW02, AM05, BV08] is on efficiently learning with as little separation as possible. Typically, the separation conditions are such that, given a random sample from the mixture model, the Bayes optimal classifier is able to reliably recover which cluster generated that point.

This work makes a natural multi-view assumption: that the views are (conditionally) uncorrelated, conditioned on which mixture component generated the views. There are many natural applications for which this assumption applies. For example, we can consider multi-modal views, with one view being a video stream and the other an audio stream of a speaker; here, conditioned on the speaker identity and maybe the phoneme (both of which could label the generating cluster), the views may be uncorrelated. A second example is the words and link structure in a document from a corpus such as Wikipedia; here, conditioned on the category of each document, the words in it and its link structure may be uncorrelated. In this paper, we provide experiments for both settings.
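As a concrete illustration of this assumption, the following minimal synthetic sketch (not from the paper; all dimensions, weights, and names are hypothetical) draws a two-view sample in which the views are independent given the cluster label, and hence conditionally uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_two_view_mixture(n, means1, means2, weights, sigma=1.0):
    """Draw n samples from a two-view Gaussian mixture.

    A source i is picked with probability weights[i]; then x1 and x2
    are drawn independently from the i-th component of each view, so
    the views are uncorrelated conditioned on the source.
    """
    k = len(weights)
    z = rng.choice(k, size=n, p=weights)  # latent cluster labels
    x1 = means1[z] + sigma * rng.standard_normal((n, means1.shape[1]))
    x2 = means2[z] + sigma * rng.standard_normal((n, means2.shape[1]))
    return x1, x2, z

# Hypothetical toy instance: k = 3 clusters, view dimensions d1 = 20, d2 = 30.
k, d1, d2 = 3, 20, 30
means1 = 5 * rng.standard_normal((k, d1))
means2 = 5 * rng.standard_normal((k, d2))
x1, x2, z = sample_two_view_mixture(2000, means1, means2, [0.5, 0.3, 0.2])
```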
Under this multi-view assumption, we provide a simple and efficient subspace learning method, based on Canonical Correlation Analysis (CCA). This algorithm is affine invariant and is able to learn with some of the weakest separation conditions to date. The intuitive reason for this is that under our multi-view assumption, we are able to (approximately) find the low-dimensional subspace spanned by the means of the component distributions. This subspace is important because, when projected onto it, the means of the distributions are well separated, yet the typical distance between points from the same distribution is smaller than in the original space. The number of samples we require to cluster correctly scales as O(d), where d is the ambient dimension. Finally, we show through experiments that CCA-based algorithms consistently provide better performance than standard PCA-based clustering methods when applied to datasets in the two quite different domains of audio-visual speaker clustering and hierarchical Wikipedia document clustering by category.

Our work adds to the growing body of results which show how the multi-view framework can alleviate the difficulty of learning problems.

Related Work. Most provably efficient clustering algorithms first project the data down to some low-dimensional space and then cluster the data in this lower-dimensional space (an algorithm such as single linkage usually suffices here). Typically, these algorithms also work under a separation requirement, which is measured by the minimum distance between the means of any two mixture components.

One of the first provably efficient algorithms for learning mixture models is due to [Das99], who learns a mixture of spherical Gaussians by randomly projecting the mixture onto a low-dimensional subspace. [VW02] provide an algorithm with an improved separation requirement that learns a mixture of k spherical Gaussians by projecting the mixture down to the k-dimensional subspace of highest variance. [KSV05, AM05] extend this result to mixtures of general Gaussians; however, they require a separation proportional to the maximum directional standard deviation of any mixture component. [CR08] use a canonical-correlations-based algorithm to learn mixtures of axis-aligned Gaussians with a separation proportional to σ*, the maximum directional standard deviation in the subspace containing the means of the distributions. Their algorithm requires a coordinate-independence property and an additional "spreading" condition. None of these algorithms are affine invariant.

Finally, [BV08] provide an affine-invariant algorithm for learning mixtures of general Gaussians, so long as the mixture has a suitably low Fisher coefficient when in isotropic position. However, their separation involves a large polynomial dependence on 1/w_min.

The two results most closely related to ours are the work of [VW02] and [CR08]. [VW02] show that it is sufficient to find the subspace spanned by the means of the distributions in the mixture for effective clustering. Like our algorithm, [CR08] use a projection onto the top k − 1 singular value decomposition subspace of the canonical correlations matrix. They also require a spreading condition, which is related to our requirement on the rank. We borrow techniques from both of these papers.

[BL08] propose a similar algorithm for multi-view clustering, in which data is projected onto the top directions obtained by kernel CCA across the views. They show empirically that for clustering images using the associated text as a second view (where the target clustering is a human-defined category), CCA-based clustering methods outperform PCA-based algorithms.

This Work. Our input is data on a fixed set of objects from two views, where View j is assumed to be generated by a mixture of k Gaussians (D_1^j, ..., D_k^j), for j = 1, 2. To generate a sample, a source i is picked with probability w_i, and x^(1) and x^(2) in Views 1 and 2 are drawn from distributions D_i^1 and D_i^2. Following prior theoretical work, our goal is to show that our algorithm recovers the correct clustering, provided the input mixture obeys certain conditions.

We impose two requirements on these mixtures. First, we require that conditioned on the source, the two views are uncorrelated. Notice that this is a weaker restriction than the condition that given source i, the samples from D_i^1 and D_i^2 are drawn independently. Moreover, this condition allows the distributions in the mixture within each view to be completely general, so long as they are uncorrelated across views. Although we do not prove this, our algorithm seems robust to small deviations from this assumption.

Second, we require the rank of the CCA matrix across the views to be at least k − 1 when each view is in isotropic position, and the (k − 1)-th singular value of this matrix to be at least λ_min. This condition ensures that there is sufficient correlation between the views. If the first two conditions hold, then we can recover the subspace containing the means in both views.
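This rank condition can be checked empirically. The sketch below (a continuation of the hypothetical toy example above, not code from the paper) whitens each view into approximate isotropic position, forms the empirical cross-correlation matrix between the whitened views, and inspects its (k − 1)-th singular value:

```python
import numpy as np

def isotropize(x, reg=1e-6):
    """Whiten x so its empirical covariance is (approximately) the identity."""
    x = x - x.mean(axis=0)
    cov = x.T @ x / len(x) + reg * np.eye(x.shape[1])  # regularize for invertibility
    evals, evecs = np.linalg.eigh(cov)
    return x @ (evecs @ np.diag(evals ** -0.5) @ evecs.T)  # multiply by cov^{-1/2}

# x1, x2, k come from the sampling sketch above.
y1, y2 = isotropize(x1), isotropize(x2)
cross = y1.T @ y2 / len(y1)  # empirical CCA matrix of the isotropic views
svals = np.linalg.svd(cross, compute_uv=False)
print("(k-1)-th singular value:", svals[k - 2])  # should be bounded away from 0
```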
In addition, for mixtures of Gaussians, if in at least one view, say View 1, we have that for every pair of distributions i and j in the mixture,

\[ \|\mu_i^1 - \mu_j^1\| > C \sigma^* k^{1/4} \sqrt{\log(n/\delta)} \]

for some constant C, then our algorithm can also determine which component each sample came from. Here µ_i^1 is the mean of the i-th component in View 1 and σ* is the maximum directional standard deviation in the subspace containing the means in View 1.

The two views lie in vector spaces V_1 of dimension d_1 and V_2 of dimension d_2, respectively. We let d = d_1 + d_2. Let µ_i^j, for i = 1, ..., k and j = 1, 2, be the mean of distribution i in view j, and let w_i be the mixing weight for distribution i.

For simplicity, we assume that the data have mean 0. We denote the covariance matrices of the data as

\[ \Sigma = E[xx^\top], \quad \Sigma_{11} = E[x^{(1)}(x^{(1)})^\top], \quad \Sigma_{22} = E[x^{(2)}(x^{(2)})^\top], \quad \Sigma_{12} = E[x^{(1)}(x^{(2)})^\top]. \]
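Using the empirical counterparts of Σ11, Σ22 and Σ12 defined above, the projection step can be sketched as follows. This is only an illustrative reading of the approach, continuing the toy example: k-means stands in for whatever clustering routine is run in the projected space, and the regularizer `reg` is a practical addition rather than part of the analysis.

```python
import numpy as np
from sklearn.cluster import KMeans

def cca_project(x1, x2, k, reg=1e-6):
    """Project view 1 onto the top k-1 canonical directions.

    Forms Sigma11^{-1/2} Sigma12 Sigma22^{-1/2} from empirical covariances
    and keeps the top k-1 left singular vectors in the whitened space.
    """
    x1, x2 = x1 - x1.mean(axis=0), x2 - x2.mean(axis=0)
    n = len(x1)
    s11 = x1.T @ x1 / n + reg * np.eye(x1.shape[1])
    s22 = x2.T @ x2 / n + reg * np.eye(x2.shape[1])
    s12 = x1.T @ x2 / n

    def inv_sqrt(m):
        evals, evecs = np.linalg.eigh(m)
        return evecs @ np.diag(evals ** -0.5) @ evecs.T

    w1, w2 = inv_sqrt(s11), inv_sqrt(s22)
    u, _, _ = np.linalg.svd(w1 @ s12 @ w2)
    return (x1 @ w1) @ u[:, :k - 1]  # n x (k-1): data near the mean subspace

# Cluster the projected data; x1, x2, z come from the sampling sketch above.
labels = KMeans(n_clusters=k, n_init=10).fit_predict(cca_project(x1, x2, k))
```

On a well-separated instance of the toy model, `labels` should agree with the latent assignments `z` up to a permutation of cluster names.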