Collaborative Hashing

Xianglong Liu†   Junfeng He‡   Cheng Deng♮   Bo Lang†
†State Key Lab of Software Development Environment, Beihang University, Beijing, China
‡Facebook, 1601 Willow Rd, Menlo Park, CA, USA
♮Xidian University, Xi'an, China
{xlliu, langbo}@nlsde.buaa.edu.cn   [email protected]   [email protected]

Abstract

Hashing techniques have become a promising approach for fast similarity search. Most existing hashing research pursues binary codes for a single type of entity by preserving the similarities among those entities. In practice, many scenarios involve nearest neighbor search on data given in matrix form, where two different yet naturally associated types of entities correspond to its two dimensions or views. To fully explore the duality between the two views, we propose a collaborative hashing scheme for data in matrix form that enables fast search in various applications, such as image search using bags of words and recommendation using user-item ratings. By simultaneously preserving both the entity similarities within each view and the interrelationship between views, our collaborative hashing effectively learns the compact binary codes and the explicit hash functions for out-of-sample extension in an alternating optimization manner. Extensive evaluations on three well-known datasets, covering both search inside a single view and search across different views, demonstrate that the proposed method outperforms state-of-the-art baselines, with significant relative accuracy gains ranging from 7.67% to 45.87%.

1. Introduction

Hashing techniques have recently attracted great attention in fast similarity search [3, 5, 12–15, 18]. Based on the concept of locality sensitivity [7], they represent data points using binary codes that preserve the original similarities among the data. The retrieval of similar data points can then be completed in sublinear or even constant time, either by Hamming distance ranking based on fast binary operations or by hash table lookup within a certain Hamming radius. Moreover, the binary compression of the original data largely reduces storage consumption.

Existing hashing approaches usually find compact binary codes by exploiting the correlations among data entities (e.g., the cosine similarities between feature vectors). In practice, many scenarios involve nearest neighbor search on a data matrix whose two dimensions correspond to two different, coupled views or entity types. For instance, classic textual retrieval works on the term-document matrix, with each element representing the correlation between the two views: words and documents. Recently, the bag-of-words (BoW) model has also been widely used in computer vision and multimedia retrieval, where it mainly captures the correlations between local features (or dictionaries learned by sparse coding) and visual objects [16, 17]. Beyond search applications, the rapidly developing recommendation literature based on collaborative filtering, which analyzes relationships between users and interdependencies among items to identify new user-item associations, also imposes strong requirements on nearest neighbor search over a collected user-item rating matrix [2, 10, 20].

In the literature, few works address the hashing problem for matrix-form data while considering the duality between the different types of entities or views. It is difficult to directly apply traditional hashing methods to hash two types of entities simultaneously while preserving their correlations. Recently, [19] and [20] attempted to address the problem by concentrating on the duality between views under different scenarios. [19] treated both documents and terms as the same type of entity in a bipartite graph and pursued binary codes for them following the idea of spectral hashing [18]. To handle unobserved/missing ratings in collaborative filtering, [20] directly learned binary codes that can recover the observed item preferences of all users. Both methods mainly exploit the correlations between the two views so as to preserve the occurrences or preferences in the training matrix. However, they neglect the fact that the entities' neighbor structures inside each view play important roles both in pursuing compact binary codes for nearest neighbor search [4, 6] and in discovering collective patterns for collaborative filtering [10, 20].

[Figure 1. The proposed collaborative hashing for nearest neighbor search over data in matrix form.]

In this paper we propose a collaborative hashing (CH) scheme for nearest neighbor search over data in matrix form. By simultaneously preserving both the inner similarities in each view (intra-view) and the interrelationships across views (inter-view), it learns compact hash codes and hash functions for the entities of both views. As Figure 1 demonstrates, the proposed collaborative scheme works as follows. On the one hand, fusing the semantic correlations between the coupled views enhances the discriminative power of the features in each view, and consequently helps obtain more compact and meaningful hash codes for each view. On the other hand, by retaining the duality between the two correlated views based on the collective patterns discovered through hash grouping in each view, collaborative hashing embeds the different types of entities into a joint semantic Hamming space, which enables fast and accurate search across views.

Therefore, the proposed collaborative hashing can serve as a unified framework for applications including: (1) search inside a single view, the most typical example being visual search using local descriptors [16, 17]; and (2) search across different views, into which recommendation using user-item ratings falls [2, 10, 20]. In many successful recommendation algorithms, matrix factorization serves as the core technique for collaborative filtering [10]. It turns item suggestion into a search based on similarities in a low-dimensional Euclidean space. Instead of the Euclidean space, our collaborative hashing finds binary codes in a Hamming space, guaranteeing efficient storage and fast similarity search across users and items.

We summarize our contributions as follows:

1. We propose a collaborative hashing scheme for general nearest neighbor search over data in matrix form. By simultaneously considering both the intra-view (within each view) and inter-view (across views) relationships in one framework, it enables the fast similarity search widely involved in applications such as image search using bags of words and recommendation using user-item ratings.

2. We formulate the task as an optimization problem whose loss function incorporates the binary quantization loss for the neighbor relations in each view, and the divergence between the training data and the predictions based on the binary codes for the interrelationships across views.

3. We adopt projection-based linear hash functions and present an alternating optimization method that effectively learns both the binary codes and the explicit hash functions for out-of-sample extension.

Our extensive empirical study on several large-scale benchmarks highlights the benefits of our method for visual search and item recommendation, with significant performance gains over state-of-the-art hashing methods.

2. Collaborative Hashing

2.1. Notation

Suppose we have a data matrix U = (u_ij) ∈ R^(n×m), consisting of n entities (images, users, etc.) of type I along one dimension and m entities (visual words, items, etc.) of type II along the other. The matrix thus characterizes the associations between the two types of entities in terms of observations such as word counts or user-item ratings: each entry u_ij indicates the presence of entity j of type II in entity i of type I, and its value reflects the strength of their association.

Existing hashing algorithms either treat the two views of the data matrix independently [2, 4], or concentrate mainly on the correlation information between the two views [19, 20]. In our collaborative hashing, we simultaneously learn discriminative hash functions for both views, which not only preserve the locality within each view but also capture the correlations between views.

Specifically, the collaborative hashing problem is formally defined as learning b hash functions h_i(·), i = 1, ..., b and g_j(·), j = 1, ..., b for the two types of entities respectively, and generating the corresponding compact binary codes H = [h_1 ... h_n] ∈ {−1, +1}^(b×n) and G = [g_1 ... g_m] ∈ {−1, +1}^(b×m) that well preserve the semantic relationships conveyed by the matrix U.

For simplicity, we apply linear hash functions h_i(x) = sign(w_i^T x) and g_i(x) = sign(v_i^T x), where W = [w_1 ... w_b] and V = [v_1 ... v_b] are the projection parameters of the two families of hash functions.

2.2. Formulation

Our basic idea is to exploit the intrinsic relations in the matrix data: both the neighbor structures within each view and the semantic correlations between views. Correspondingly, two types of losses guide the hash function learning in our formulation: the binary quantization loss of the hash functions, and the correlation prediction loss incurred when using the binary codes.

[…] feature, of which each element is regarded as an observed occurrence. For sparse observations, we give a simple yet efficient solution in Section 2.5.2.

Putting the quantization error and the prediction error together, we minimize the objective function

E = E_quan + λ E_rate,    (5)

where λ is a fixed weight balancing the importance of similarities within a single view and across views. By rewriting the objective in matrix form with an n × m matrix of ones J, we can get
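The ingredients introduced so far — linear hash functions sign(W^T x), an intra-view quantization loss, and an inter-view prediction loss combined as in Eq. (5) — can be sketched in plain Python. Note that equations (1)–(4), which define E_quan and E_rate precisely, fall outside this excerpt; the concrete forms below (an ITQ-style quantization error and a bit-agreement rating predictor) are illustrative assumptions, not the paper's exact definitions, and all data here is randomly generated for demonstration.

```python
import random

random.seed(0)

# Toy sizes: b hash bits, n type-I entities (e.g. users), m type-II entities
# (e.g. items), each described by a d-dimensional feature vector.
b, n, m, d = 4, 6, 5, 3

def sign(t):
    return 1 if t >= 0 else -1

def hash_code(P, x):
    """h(x) = sign(P^T x): one bit per projection vector p_i."""
    return [sign(sum(p_k * x_k for p_k, x_k in zip(p, x))) for p in P]

# Random features for the two entity types, and random projections W, V
# (in the paper these are learned by alternating optimization).
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
Y = [[random.gauss(0, 1) for _ in range(d)] for _ in range(m)]
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(b)]
V = [[random.gauss(0, 1) for _ in range(d)] for _ in range(b)]

H = [hash_code(W, x) for x in X]  # n codes in {-1,+1}^b (view I)
G = [hash_code(V, y) for y in Y]  # m codes in {-1,+1}^b (view II)

def quantization_loss(codes, P, feats):
    """Sum of ||h - P^T x||^2: how much binarization distorts the
    projections (an ITQ-style stand-in for the intra-view loss)."""
    total = 0.0
    for h, x in zip(codes, feats):
        proj = [sum(p_k * x_k for p_k, x_k in zip(p, x)) for p in P]
        total += sum((hi - pi) ** 2 for hi, pi in zip(h, proj))
    return total

def predicted_rating(h, g):
    """Fraction of agreeing bits, (h^T g + b) / (2b), in [0, 1] -- a
    hypothetical instantiation of the cross-view predictor."""
    return (sum(hi * gi for hi, gi in zip(h, g)) + b) / (2.0 * b)

# Toy observation matrix U with entries in [0, 1].
U = [[random.random() for _ in range(m)] for _ in range(n)]

E_quan = quantization_loss(H, W, X) + quantization_loss(G, V, Y)
E_rate = sum((U[i][j] - predicted_rating(H[i], G[j])) ** 2
             for i in range(n) for j in range(m))
lam = 1.0                      # the fixed balance weight lambda in Eq. (5)
E = E_quan + lam * E_rate      # objective minimized over H, G, W, V
```

Once codes are learned, search across views reduces to ranking by bit agreement: candidate items j for user i can be ordered by `predicted_rating(H[i], G[j])`, a cheap binary operation, which is what makes the joint Hamming space attractive for recommendation.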
