Learning a Distance Metric from Relative Comparisons

Matthew Schultz and Thorsten Joachims
Department of Computer Science, Cornell University, Ithaca, NY 14853
{schultz,tj}@cs.cornell.edu

Abstract

This paper presents a method for learning a distance metric from relative comparisons such as "A is closer to B than A is to C". Taking a Support Vector Machine (SVM) approach, we develop an algorithm that provides a flexible way of describing qualitative training data as a set of constraints. We show that such constraints lead to a convex quadratic programming problem that can be solved by adapting standard methods for SVM training. We empirically evaluate the performance and the modelling flexibility of the algorithm on a collection of text documents.

1 Introduction

Distance metrics are an essential component in many applications, ranging from supervised learning and clustering to product recommendations and document browsing. Since designing such metrics by hand is difficult, we explore the problem of learning a metric from examples. In particular, we consider relative and qualitative examples of the form "A is closer to B than A is to C". We believe that feedback of this type is more easily available in many application settings than quantitative examples (e.g. "the distance between A and B is 7.35") as considered in metric Multidimensional Scaling (MDS) (see [4]), or absolute qualitative feedback (e.g. "A and B are similar", "A and C are not similar") as considered in [11].

Building on the study in [7], search-engine query logs are one example where feedback of the form "A is closer to B than A is to C" is readily available for learning a (more semantic) similarity metric on documents. Given a ranked result list for a query, documents that are clicked on can be assumed to be semantically closer than those documents that the user observed but decided not to click on (i.e. "A (clicked) is closer to B (clicked) than A is to C (not clicked)").
In contrast, drawing the conclusion that "A and C are not similar" is probably less justified, since a non-clicked document C that is high in the presented ranking is probably still closer to A than most documents in the collection.

In this paper, we present an algorithm that can learn a distance metric from such relative and qualitative examples. Given a parametrized family of distance metrics, the algorithm discriminatively searches for the parameters that best fulfill the training examples. Taking a maximum-margin approach [9], we formulate the training problem as a convex quadratic program for the case of learning a weighting of the dimensions. We evaluate the performance and the modelling flexibility of the algorithm on a collection of text documents.

The notation used throughout this paper is as follows. Vectors are denoted with an arrow, $\vec{x}$, where $x_i$ is the $i$-th entry in vector $\vec{x}$. The vector $\vec{0}$ is composed of all zeros, and the vector $\vec{1}$ is composed of all ones. $\vec{x}^T$ is the transpose of vector $\vec{x}$, and the dot product is denoted by $\vec{x}^T \vec{y}$. We denote the element-wise product of two vectors $\vec{x} = (x_1, \ldots, x_n)^T$ and $\vec{y} = (y_1, \ldots, y_n)^T$ as $\vec{x} * \vec{y} = (x_1 y_1, \ldots, x_n y_n)^T$.

2 Learning from Relative Qualitative Feedback

We consider the following learning setting. Given is a set $X_{train}$ of objects $\vec{x}_i \in \Re^N$. As training data, we receive a subset $P_{train}$ of all potential relative comparisons defined over the set $X_{train}$. Each relative comparison $(i, j, k) \in P_{train}$ with $\vec{x}_i, \vec{x}_j, \vec{x}_k \in X_{train}$ has the semantics

  $\vec{x}_i$ is closer to $\vec{x}_j$ than $\vec{x}_i$ is to $\vec{x}_k$.

The goal of the learner is to learn a weighted distance metric $d_{A,W}(\cdot, \cdot)$ from $X_{train}$ and $P_{train}$ that best approximates the desired notion of distance on a new set of test points $X_{test}$, $X_{train} \cap X_{test} = \emptyset$. We evaluate the performance of a metric $d_{A,W}(\cdot, \cdot)$ by how many relative comparisons $P_{test}$ it fulfills on the test set.

3 Parameterized Distance Metrics

A (pseudo) distance metric $d(\vec{x}, \vec{y})$ is a function over pairs of objects $\vec{x}$ and $\vec{y}$ from some set $X$.
$d(\vec{x}, \vec{y})$ is a pseudo metric iff it obeys the four following properties for all $\vec{x}$, $\vec{y}$, and $\vec{z}$:

  $d(\vec{x}, \vec{x}) = 0$,  $d(\vec{x}, \vec{y}) = d(\vec{y}, \vec{x})$,  $d(\vec{x}, \vec{y}) \geq 0$,  $d(\vec{x}, \vec{y}) + d(\vec{y}, \vec{z}) \geq d(\vec{x}, \vec{z})$

It is a metric iff it also obeys $d(\vec{x}, \vec{y}) = 0 \Rightarrow \vec{x} = \vec{y}$.

In this paper, we consider a distance metric $d_{A,W}(\vec{x}, \vec{y})$ between vectors $\vec{x}, \vec{y} \in \Re^N$ parameterized by two matrices, $A$ and $W$:

  $d_{A,W}(\vec{x}, \vec{y}) = \sqrt{(\vec{x} - \vec{y})^T A W A^T (\vec{x} - \vec{y})}$   (1)

$W$ is a diagonal matrix with non-negative entries, and $A$ is any real matrix. Note that the matrix $A W A^T$ is positive semi-definite, so that $d_{A,W}(\vec{x}, \vec{y})$ is a valid distance metric.

This parametrization is very flexible. In the simplest case, $A$ is the identity matrix $I$, and $d_{I,W}(\vec{x}, \vec{y}) = \sqrt{(\vec{x} - \vec{y})^T I W I^T (\vec{x} - \vec{y})}$ is a weighted Euclidean distance $d_{I,W}(\vec{x}, \vec{y}) = \sqrt{\sum_i W_{ii} (x_i - y_i)^2}$.

In the general case, $A$ can be any real matrix. This corresponds to applying a linear transformation to the input data with the matrix $A$. After the transformation, the distance becomes a weighted Euclidean distance on the transformed input points $A^T \vec{x}$, $A^T \vec{y}$:

  $d_{A,W}(\vec{x}, \vec{y}) = \sqrt{((\vec{x} - \vec{y})^T A) \, W \, (A^T (\vec{x} - \vec{y}))}$   (2)

The use of kernels $K(\vec{x}, \vec{y}) = \Phi(\vec{x}) \cdot \Phi(\vec{y})$ suggests a particular choice of $A$. Let $\Phi$ be the matrix where the $i$-th column is the (training) vector $\vec{x}_i$ projected into a feature space using the function $\Phi(\vec{x}_i)$. Then

  $d_{\Phi,W}(\Phi(\vec{x}), \Phi(\vec{y})) = \sqrt{((\Phi(\vec{x}) - \Phi(\vec{y}))^T \Phi) \, W \, (\Phi^T (\Phi(\vec{x}) - \Phi(\vec{y})))}$   (3)

  $= \sqrt{\sum_{i=1}^{n} W_{ii} \, (K(\vec{x}, \vec{x}_i) - K(\vec{y}, \vec{x}_i))^2}$   (4)

is a distance metric in the feature space.

4 An SVM Algorithm for Learning from Relative Comparisons

Given a training set $P_{train}$ of relative comparisons over a set of vectors $X_{train}$, and the matrix $A$, we aim to fit the parameters in the diagonal matrix $W$ of distance metric $d_{A,W}(\vec{x}, \vec{y})$ so that the training error (i.e. the number of violated constraints) is minimized. Finding a solution of zero training error is equivalent to finding a $W$ that fulfills the following set of constraints:

  $\forall (i, j, k) \in P_{train}: \quad d_{A,W}(\vec{x}_i, \vec{x}_k) - d_{A,W}(\vec{x}_i, \vec{x}_j) > 0$   (5)

If the set of constraints is feasible and a $W$ exists that fulfills all constraints, the solution is typically not unique.
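Before turning to how a particular $W$ is selected, the parametrized distance of Eq. (1) and the training error behind constraints (5) can be sketched in a few lines of numpy. This is a minimal illustration with invented data; the names `d_AW` and `violated` are mine, not the paper's:

```python
import numpy as np

def d_AW(x, y, A, w):
    """d_{A,W}(x, y) = sqrt((x - y)^T A W A^T (x - y))  (Eq. 1),
    where w holds the non-negative diagonal of W."""
    z = A.T @ (x - y)              # transform the difference vector
    return np.sqrt(np.sum(w * z * z))

def violated(X, trips, A, w):
    """Number of relative comparisons (i, j, k) that d_{A,W} violates
    (Eq. 5) -- the training error the algorithm minimizes."""
    return sum(d_AW(X[i], X[k], A, w) - d_AW(X[i], X[j], A, w) <= 0
               for i, j, k in trips)

# Toy data: x_0 should be closer to x_1 than to x_2.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]])
trips = [(0, 1, 2)]
A = np.eye(2)
print(violated(X, trips, A, np.array([1.0, 1.0])))  # 0: fulfilled
print(violated(X, trips, A, np.array([1.0, 0.0])))  # 1: zeroing the
                                                    # weight of dim 1
                                                    # breaks it
```

With $A = I$ this reduces to the weighted Euclidean case; passing a non-square $A$ gives the transformed distance of Eq. (2).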
We aim to select a matrix $W$ such that $d_{A,W}(\vec{x}, \vec{y})$ remains as close to an unweighted Euclidean metric as possible. Following [8], we minimize the $L_2$ norm of the eigenvalues of $A W A^T$. Since $\|A W A^T\|_F^2$ equals the squared $L_2$ norm of these eigenvalues, this leads to the following optimization problem:

  min  $\frac{1}{2} \|A W A^T\|_F^2$
  s.t. $\forall (i,j,k) \in P_{train}: \; (\vec{x}_i - \vec{x}_k)^T A W A^T (\vec{x}_i - \vec{x}_k) - (\vec{x}_i - \vec{x}_j)^T A W A^T (\vec{x}_i - \vec{x}_j) \geq 1$
       $W_{ii} \geq 0$

Unlike in [8], this formulation ensures that $d_{A,W}(\vec{x}, \vec{y})$ is a metric, avoiding the need for semi-definite programming as in [11].

As in classification SVMs, we add slack variables [3] to account for constraints that cannot be satisfied. This leads to the following optimization problem:

  min  $\frac{1}{2} \|A W A^T\|_F^2 + C \sum_{i,j,k} \xi_{ijk}$
  s.t. $\forall (i,j,k) \in P_{train}: \; (\vec{x}_i - \vec{x}_k)^T A W A^T (\vec{x}_i - \vec{x}_k) - (\vec{x}_i - \vec{x}_j)^T A W A^T (\vec{x}_i - \vec{x}_j) \geq 1 - \xi_{ijk}$
       $\xi_{ijk} \geq 0, \quad W_{ii} \geq 0$

The sum of the slack variables in the objective is an upper bound on the number of violated constraints.

All distances $d_{A,W}(\vec{x}, \vec{y})$ can be written in the following linear form. If we let $\vec{w}$ be the diagonal elements of $W$, then the distance can be written as

  $d_{A,W}(\vec{x}, \vec{y}) = \sqrt{((\vec{x} - \vec{y})^T A) \, W \, (A^T (\vec{x} - \vec{y}))} = \sqrt{\vec{w}^T \left( (A^T \vec{x} - A^T \vec{y}) * (A^T \vec{x} - A^T \vec{y}) \right)}$   (6)

where $*$ denotes the element-wise product. If we let $\Delta_{i,j} = (A^T \vec{x}_i - A^T \vec{x}_j) * (A^T \vec{x}_i - A^T \vec{x}_j)$, then the constraints in the optimization problem can be rewritten in the following linear form:

  $\forall (i,j,k) \in P_{train}: \quad \vec{w}^T (\Delta_{i,k} - \Delta_{i,j}) \geq 1 - \xi_{ijk}$   (7)

Furthermore, the objective function is quadratic, so that the optimization problem can be written as

  min  $\frac{1}{2} \vec{w}^T L \vec{w} + C \sum_{i,j,k} \xi_{ijk}$
  s.t. $\forall (i,j,k) \in P_{train}: \; \vec{w}^T (\Delta_{i,k} - \Delta_{i,j}) \geq 1 - \xi_{ijk}$
       $\xi_{ijk} \geq 0, \quad w_i \geq 0$   (8)

For the case of $A = I$, $\|W\|_F^2 = \vec{w}^T L \vec{w}$ with $L = I$. For the case of $A = \Phi$, we define $L = (\Phi^T \Phi) * (\Phi^T \Phi)$ so that $\|\Phi W \Phi^T\|_F^2 = \vec{w}^T L \vec{w}$. Note that $L$ is positive semi-definite in both cases and that, therefore, the optimization problem is convex quadratic.

Figure 1: Graphical example of using different A matrices (panels 1a, 1b, 2a, 2b). In example 1, A is the identity matrix, and in example 2, A is composed of the training examples projected into a high-dimensional space using an RBF kernel.

5 Experiments

In Figure 1, we display a graphical example of our method. Example 1 is an example of a weighted Euclidean distance.
The input data points are shown in 1a), and our training constraints specify that the distance between two square points should be less than the distance to a circle.
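A toy scenario of this kind can be imitated end to end. The sketch below is not the authors' implementation: it optimizes the hinge-loss equivalent of problem (8) for $A = I$ (so $L = I$) by projected subgradient descent instead of an adapted SVM/QP solver, on invented two-cluster data where only dimension 0 is informative. All data and names are illustrative:

```python
import numpy as np

# Two clusters separated along dimension 0; dimension 1 is pure noise.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], [0.1, 2.0], (5, 2)),
               rng.normal([4, 0], [0.1, 2.0], (5, 2))])
# Triplets (i, j, k): x_i should be closer to same-cluster x_j than to x_k.
trips = [(i, j, k) for i in range(5) for j in range(5) if i != j
         for k in range(5, 10)]

def delta(i, j):
    z = X[i] - X[j]        # A = I: Delta_{i,j} is simply the
    return z * z           # element-wise squared difference (Eq. 6)

D = np.array([delta(i, k) - delta(i, j) for i, j, k in trips])

# Minimize 0.5*||w||^2 + C * sum max(0, 1 - w^T (Delta_ik - Delta_ij))
# by projected subgradient descent, keeping w >= 0.
C, lr = 0.1, 0.01
w = np.ones(2)
for _ in range(2000):
    viol = D @ w < 1.0                     # active hinge terms
    grad = w - C * D[viol].sum(axis=0)
    w = np.maximum(w - lr * grad, 0.0)     # project onto w >= 0

# Fraction of the relative comparisons the learned metric fulfills.
frac = np.mean([w @ delta(i, j) < w @ delta(i, k) for i, j, k in trips])
print(w, frac)   # dimension 0 should carry nearly all the weight
```

The learned weight on the informative dimension should dominate the noisy one, and the fraction of fulfilled comparisons should approach 1, mirroring the weighted-Euclidean example of Figure 1.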
