Self-Weighted Multiple Kernel Learning for Graph-Based Clustering and Semi-Supervised Classification

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18)

Zhao Kang1*, Xiao Lu1, Jinfeng Yi2, Zenglin Xu1*
1 SMILE Lab, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan 611731, China
2 JD AI Research, Beijing 100101, China
[email protected], [email protected], [email protected], [email protected]
* Corresponding author.

Abstract

Multiple kernel learning (MKL) is generally believed to perform better than single kernel methods. However, some empirical studies show that this is not always true: the combination of multiple kernels may yield an even worse performance than using a single kernel. There are two possible reasons for the failure: (i) most existing MKL methods assume that the optimal kernel is a linear combination of base kernels, which may not hold true; and (ii) some kernel weights are inappropriately assigned due to noise and carelessly designed algorithms. In this paper, we propose a novel MKL framework based on two intuitive assumptions: (i) each kernel is a perturbation of the consensus kernel; and (ii) a kernel that is close to the consensus kernel should be assigned a large weight. Impressively, the proposed method can automatically assign an appropriate weight to each kernel without introducing additional parameters, as existing methods do. The proposed scheme is integrated into a unified framework for graph-based clustering and semi-supervised classification. We have conducted experiments on multiple benchmark datasets, and our empirical results verify the superiority of the proposed framework.

1 Introduction

As a principled way of introducing non-linearity into linear models, kernel methods have been widely applied in many machine learning tasks [Hofmann et al., 2008; Xu et al., 2010]. Although improved performance has been reported in a wide variety of problems, kernel methods require the user to select and tune a single pre-defined kernel. This is not user-friendly, since the most suitable kernel for a specific task is usually hard to decide. Moreover, it is time-consuming and impractical to exhaustively search a large pool of candidate kernels. Multiple kernel learning (MKL) was proposed to address this issue, as it offers an automatic way of learning an optimal combination of distinct base kernels [Xu et al., 2009]. Generally speaking, an MKL method should yield better performance than a single kernel approach.

A key step in MKL is to assign a reasonable weight to each kernel according to its importance. One popular approach considers a weighted combination of candidate kernel matrices, leading to a convex quadratically constrained quadratic program. However, this method over-reduces the feasible set of the optimal kernel, which may lead to a less representative solution. In fact, these MKL algorithms sometimes fail to outperform single kernel methods or traditional non-weighted kernel approaches [Yu et al., 2010; Gehler and Nowozin, 2009]. Another issue is inappropriate weight assignment. Some attempts aim to learn the local importance of features by assuming that samples may vary locally [Gönen and Alpaydın, 2008]. However, they induce more complex computational problems.

To address these issues, in this paper we model the differences among kernels by following two intuitive assumptions: (i) each kernel is a perturbation of the consensus kernel; and (ii) the kernel that is close to the consensus kernel should receive a large weight. As a result, instead of enforcing the optimal kernel to be a linear combination of predefined kernels, this approach allows the most suitable kernel to reside in the neighborhood of some kernels. Moreover, our proposed method can assign an optimal weight to each kernel automatically, without introducing an additional parameter as existing methods do. We then combine this novel weighting scheme with graph-based clustering and semi-supervised learning (SSL). Owing to their effectiveness in similarity graph construction, graph-based clustering and SSL have shown impressive performance [Nie et al., 2017a; Kang et al., 2017b]. Finally, a novel multiple kernel learning framework for clustering and semi-supervised learning is developed.

In summary, our main contributions are two-fold:

• We propose a novel way to construct the optimal kernel and assign weights to base kernels. Notably, our proposed method can find a better kernel in the neighborhood of the candidate kernels. Each weight is a function of the kernel matrix, so we do not need to introduce an additional parameter as existing methods do. This also eases the burden of solving a constrained quadratic program.

• A unified framework for clustering and SSL is developed. It seamlessly integrates the components of graph construction, label learning, and kernel learning by incorporating the graph structure constraint. This allows them to negotiate with each other to achieve overall optimality. Our experiments on multiple real-world datasets verify the effectiveness of the proposed framework.

2 Related Work

In this section, we divide the related work into two categories, namely graph-based clustering and SSL, and parameter-weighted multiple kernel learning.

2.1 Graph-based Clustering and SSL

Graph-based clustering [Ng et al., 2002; Yang et al., 2017] and SSL [Zhu et al., 2003] have been popular for their simplicity and impressive performance. The graph matrix measuring the similarity of data points is crucial to their performance, and there is no fully satisfying solution to its construction. Recently, automatically learning the graph from data has achieved promising results. One approach is based on the adaptive neighbor idea, i.e., s_{ij} is learned as a measure of the probability that X_i is a neighbor of X_j; the learned S is then treated as the input graph for clustering [Nie et al., 2014; Huang et al., 2018b] and SSL [Nie et al., 2017a]. Another approach uses the so-called self-expressiveness property, i.e., each data point is expressed as a weighted combination of the other points, and the learned weight matrix behaves like a graph matrix. Representative works in this category include [Huang et al., 2015; Li et al., 2015; Kang et al., 2018]. These methods are all developed in the original feature space. To make the model more general, we develop ours in kernel space. Our purpose is to learn a graph with exactly c connected components if the data contain c clusters or classes. In this work, we consider this condition explicitly.

2.2 Parameter-weighted Multiple Kernel Learning

It is well known that the performance of a kernel method crucially depends on the kernel function, as it intrinsically specifies the feature space. MKL is an efficient way to automate kernel selection and to embed different notions of similarity [Kang et al., 2017a]. It is generally formulated as follows:

\min_{\theta} f(K) \quad \text{s.t. } K = \sum_i \theta_i H^i, \ \sum_i (\theta_i)^q = 1, \ \theta_i \ge 0, \qquad (1)

where f is the objective function, K is the consensus kernel, H^i is the i-th artificially constructed base kernel, \theta_i represents the weight for kernel H^i, and q > 0 is used to smoothen the weight distribution. Therefore, we repeatedly solve for \theta and tune q. Though this approach is widely used, it still suffers from the problem that the combined kernel may perform worse than a single kernel method, which hinders the practical use of MKL. This phenomenon has been observed for many years [Cortes, 2009] but rarely studied. Thus, it is vital to develop new approaches.

3 Methodology

3.1 Notations

Throughout the paper, all matrices are written in uppercase. For a matrix X, its ij-th element and j-th column are denoted by x_{ij} and X_j, respectively. The trace of X is denoted by Tr(X). The i-th kernel of X is written as H^i. The p-norm of a vector x is represented by \|x\|_p. The Frobenius norm of a matrix X is denoted by \|X\|_F. I is an identity matrix of proper size. S \ge 0 means all entries of S are nonnegative.

3.2 Self-weighted Multiple Kernel Learning

The aforementioned self-expressiveness based graph learning method can be formulated as:

\min_S \|X - XS\|_F^2 + \gamma \|S\|_F^2 \quad \text{s.t. } S \ge 0, \qquad (2)

where \gamma is the trade-off parameter. To retain the power of kernel methods, we extend Eq. (2) to its kernel version by using a kernel mapping \phi. According to the kernel trick K(x, z) = \phi(x)^T \phi(z), we have

\min_S \|\phi(X) - \phi(X)S\|_F^2 + \gamma \|S\|_F^2
\Longleftrightarrow \min_S Tr(K - 2KS + S^T K S) + \gamma \|S\|_F^2 \quad \text{s.t. } S \ge 0. \qquad (3)

Ideally, we hope to obtain a graph with exactly c connected components if the data contain c clusters or classes; in other words, the graph S is block diagonal under proper permutations. It is straightforward to check that S in Eq. (3) can hardly satisfy such a constraint.

If the similarity graph matrix S is nonnegative, then the Laplacian matrix L = D - S, where D is the diagonal degree matrix defined as d_{ii} = \sum_j s_{ij}, has the following important property [Mohar et al., 1991]:

Theorem 1. The multiplicity c of the eigenvalue 0 of the Laplacian matrix L equals the number of connected components in the graph associated with S.

Theorem 1 indicates that if rank(L) = n - c, the constraint on S will hold. Therefore, problem (3) can be rewritten as:

\min_S Tr(K - 2KS + S^T K S) + \gamma \|S\|_F^2 \quad \text{s.t. } S \ge 0, \ rank(L) = n - c. \qquad (4)

Suppose \sigma_i(L) is the i-th smallest eigenvalue of L.
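Theorem 1 can be checked numerically: for a nonnegative similarity matrix S whose graph has c connected components, the Laplacian L = D - S has exactly c (near-)zero eigenvalues. A minimal sketch, using a hypothetical toy graph with two obvious components:

```python
import numpy as np

# Hypothetical similarity graph with two connected components:
# nodes {0, 1, 2} fully connected, nodes {3, 4} connected to each other.
S = np.zeros((5, 5))
S[:3, :3] = 1.0
S[3:, 3:] = 1.0
np.fill_diagonal(S, 0.0)

D = np.diag(S.sum(axis=1))       # degree matrix, d_ii = sum_j s_ij
L = D - S                        # graph Laplacian

eigvals = np.linalg.eigvalsh(L)  # L is symmetric, so eigvalsh applies
n_components = int(np.sum(eigvals < 1e-10))
print(n_components)              # -> 2, the multiplicity of eigenvalue 0
```

This is exactly why enforcing rank(L) = n - c in Eq. (4) forces the learned graph to split into c components.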
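Before the rank constraint is imposed, the kernelized objective in Eq. (3) without the nonnegativity constraint has a closed-form minimizer: setting the gradient -2K + 2KS + 2\gamma S to zero gives S = (K + \gamma I)^{-1} K. A minimal sketch of this relaxed solution (the paper's full method also enforces S \ge 0 and the rank constraint; the linear kernel and data here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 8))   # 10 features, 8 samples (columns are points)
K = X.T @ X                        # linear kernel: K = phi(X)^T phi(X)
gamma = 1.0

# Gradient of Tr(K - 2KS + S^T K S) + gamma * ||S||_F^2 vanishes when
# (K + gamma * I) S = K, hence:
S = np.linalg.solve(K + gamma * np.eye(K.shape[0]), K)

obj = np.trace(K - 2 * K @ S + S.T @ K @ S) + gamma * np.linalg.norm(S, 'fro')**2
# S = 0 gives objective Tr(K); the unconstrained minimizer can only do better.
print(obj <= np.trace(K))          # -> True
```

Because K commutes with (K + \gamma I)^{-1}, this relaxed S is symmetric, which is a convenient property for a similarity graph.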
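For contrast with the proposed scheme, the conventional parameter-weighted combination of Eq. (1) can be sketched as follows. The base kernels, the raw weights, and q = 2 are illustrative assumptions; the objective f is left abstract in Eq. (1) and is not modeled here:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3                        # n samples, m base kernels
H = []
for _ in range(m):                 # build PSD base kernels H^i from random factors
    A = rng.standard_normal((n, n))
    H.append(A @ A.T)

q = 2.0
theta = rng.random(m)              # raw nonnegative weights (illustrative values)
theta /= (theta ** q).sum() ** (1.0 / q)    # enforce sum_i theta_i^q = 1

K = sum(t * Hi for t, Hi in zip(theta, H))  # consensus kernel K = sum_i theta_i H^i
print(np.isclose((theta ** q).sum(), 1.0))  # -> True: constraint holds
```

This illustrates the issue the paper raises: K is confined to the simplex-like span of the H^i, and the extra hyperparameter q must be tuned by hand, whereas the proposed method derives the weights from the kernel matrices themselves.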
