## Learning a distance metric from relative comparisons (2004)

### Download Links

- [www.cs.cornell.edu]
- [books.nips.cc]
- DBLP

Venue: Proceedings of Neural Information Processing Systems (NIPS)

Citations: 133 (0 self)

### BibTeX

```bibtex
@inproceedings{Schultz04learninga,
  author    = {Matthew Schultz and Thorsten Joachims},
  title     = {Learning a distance metric from relative comparisons},
  booktitle = {Proceedings of Neural Information Processing Systems},
  year      = {2004}
}
```


### Abstract

This paper presents a method for learning a distance metric from relative comparisons such as “A is closer to B than A is to C”. Taking a Support Vector Machine (SVM) approach, we develop an algorithm that provides a flexible way of describing qualitative training data as a set of constraints. We show that such constraints lead to a convex quadratic programming problem that can be solved by adapting standard methods for SVM training. We empirically evaluate the performance and the modelling flexibility of the algorithm on a collection of text documents.
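The relative comparisons in the abstract can be encoded directly against a weighted Euclidean metric. As a minimal sketch of the diagonal-weight case and its margin constraint (function names and the margin convention are my own, not the paper's code):

```python
import numpy as np

def weighted_sq_dist(x, y, w):
    """Squared distance under a diagonal weighting w (w >= 0):
    d_w(x, y)^2 = (x - y)^T diag(w) (x - y)."""
    diff = x - y
    return float(diff @ (w * diff))

def satisfies(a, b, c, w, margin=1.0):
    """Check the relative comparison "a is closer to b than a is to c"
    with an SVM-style margin: d(a, c)^2 - d(a, b)^2 >= margin."""
    return weighted_sq_dist(a, c, w) - weighted_sq_dist(a, b, w) >= margin
```

With `w` fixed to all ones this reduces to the ordinary squared Euclidean distance; learning then adjusts `w` so that as many training triplets as possible satisfy the margin.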

### Citations

9811 | Statistical Learning Theory
- Vapnik
- 1998
Citation Context: …qualitative examples. Given a parametrized family of distance metrics, the algorithm discriminatively searches for the parameters that best fulfill the training examples. Taking a maximum-margin approach [9], we formulate the training problem as a convex quadratic program for the case of learning a weighting of the dimensions. We evaluate the performance and the modelling flexibility of the algorithm on …

2985 | Indexing by latent semantic analysis
- Deerwester, Dumais, et al.
- 1990
Citation Context: …gradient descent to adapt a parameterized distance metric according to user feedback. Other related work includes dimension reduction techniques such as Multidimensional Scaling (MDS) [4] and Latent Semantic Indexing [6]. Metric MDS techniques take as input a matrix D of dissimilarities (or similarities) between all points in some collection and then seek to arrange the points in a d-dimensional space to minimize th…

2353 | Support-vector network
- Cortes, Vapnik
- 1995
Citation Context: … $d_{A,W}(\mathbf{x},\mathbf{y}) = \sqrt{(\mathbf{x}-\mathbf{y})^{T} A W A^{T} (\mathbf{x}-\mathbf{y})}$. Unlike in [8], this formulation ensures that $d_{A,W}(\cdot,\cdot)$ is a metric, avoiding the need for semi-definite programming like in [11]. As in classification SVMs, we add slack variables [3] to account for constraints that cannot be satisfied. This leads to the following optimization problem: $\min_{W}\ \tfrac{1}{2}\lVert AWA^{T}\rVert_{F}^{2} + C\sum_{(i,j,k)}\xi_{ijk}$ subject to $d_{A,W}^{2}(\mathbf{x}_i,\mathbf{x}_k) - d_{A,W}^{2}(\mathbf{x}_i,\mathbf{x}_j) \ge 1 - \xi_{ijk}$, $\xi_{ijk} \ge 0$, $W_{ii} \ge 0$ …
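The slack-variable formulation in this context is a convex QP that the paper solves by adapting SVM training. For the special case of a diagonal weighting (A = I), the same regularized hinge-loss objective can be sketched with a projected subgradient loop; the function name, learning rate, and iteration count below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def learn_diag_metric(triplets, dim, C=1.0, lr=0.01, iters=500):
    """Projected subgradient sketch of the diagonal-weight case:
    minimize 0.5 * ||w||^2 + C * sum of hinge losses over triplets
    (a, b, c) meaning "a is closer to b than a is to c", keeping w >= 0.
    An illustrative alternative to solving the dual QP."""
    w = np.ones(dim)
    for _ in range(iters):
        grad = w.copy()                      # gradient of 0.5 * ||w||^2
        for a, b, c in triplets:
            g = (a - c) ** 2 - (a - b) ** 2  # d^2(a,c) - d^2(a,b) = w @ g
            if w @ g < 1.0:                  # margin violated: hinge active
                grad -= C * g
        w = np.maximum(w - lr * grad, 0.0)   # project onto w >= 0
    return w
```

Because the squared-distance difference is linear in `w`, each constraint is a linear inequality and the overall problem stays convex, which is the property the paper exploits.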

925 | Optimizing search engines using clickthrough data
- Joachims
- 2002
Citation Context: …considered in metric Multidimensional Scaling (MDS) (see [4]), or absolute qualitative feedback (e.g. “A and B are similar”, “A and C are not similar”) as considered in [11]. Building on the study in [7], search-engine query logs are one example where feedback of the form “A is closer to B than A is to C” is readily available for learning a (more semantic) similarity metric on documents. Given a rank…

540 | Distance metric learning with applications to clustering with side information
- Xing, Ng, et al.
- 2003
Citation Context: …the distance between A and B is 7.35”) as considered in metric Multidimensional Scaling (MDS) (see [4]), or absolute qualitative feedback (e.g. “A and B are similar”, “A and C are not similar”) as considered in [11]. Building on the study in [7], search-engine query logs are one example where feedback of the form “A is closer to B than A is to C” is readily available for learning a (more semantic) similarity met…

437 | Multidimensional Scaling
- Cox, Cox
- 1994
Citation Context: …of this type is more easily available in many application settings than quantitative examples (e.g. “the distance between A and B is 7.35”) as considered in metric Multidimensional Scaling (MDS) (see [4]), or absolute qualitative feedback (e.g. “A and B are similar”, “A and C are not similar”) as considered in [11]. Building on the study in [7], search-engine query logs are one example where feedback…

359 | Learning to extract symbolic knowledge from the World Wide Web
- Craven, DiPasquo, et al.
- 1998
Citation Context: …used an RBF kernel and learned a distance metric to separate the clusters. The result is shown in 2b. To validate the method using a real-world example, we ran several experiments on the WEBKB data set [5]. In order to illustrate the versatility of relative comparisons, we generated three different distance metrics from the same data set and ran three types of tests: an accuracy test, a learning curve …

350 | Constrained k-means clustering with background knowledge
- Wagstaff, Cardie, et al.
- 2001
Citation Context: …from the relative constraints considered here. Secondly, their method does not use regularization. Related are also techniques for semi-supervised clustering, as it is also considered in [11]. While [10] does not change the distance metric, [2] uses gradient descent to adapt a parameterized distance metric according to user feedback. Other related work includes dimension reduction techniques such as Multi…

102 | Semi-supervised clustering with user feedback
- Cohn, Caruana, et al.
- 2003
Citation Context: …here. Secondly, their method does not use regularization. Related are also techniques for semi-supervised clustering, as it is also considered in [11]. While [10] does not change the distance metric, [2] uses gradient descent to adapt a parameterized distance metric according to user feedback. Other related work includes dimension reduction techniques such as Multidimensional Scaling (MDS) [4] and Latent …

50 | XGvis: Interactive Data Visualization with Multidimensional Scaling
- Buja, DF, et al.
- 2001
Citation Context: …+FacultyStudent Distance (Figure c). To produce the plots in Table 7, all pairwise distances between the points were computed and then projected into 2D using a classical, metric MDS algorithm [1]. Figure a) in Table 7 is the result of using the pairwise distances resulting from the unweighted, binary norm in MDS. There is no clear distinction between any of the clusters in 2 dimensions. In Fi…
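The classical metric MDS projection mentioned in this context can be sketched via double centering and an eigendecomposition. This is the standard Torgerson-style construction, assumed here for illustration rather than taken from the cited implementation:

```python
import numpy as np

def classical_mds(D, d=2):
    """Classical (Torgerson) metric MDS: embed n points into d dimensions
    from an n x n matrix D of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:d]      # keep the d largest
    scales = np.sqrt(np.maximum(vals[idx], 0.0))
    return vecs[:, idx] * scales          # n x d embedding
```

When the input distances are exactly Euclidean and the points truly lie in d dimensions, this recovers them up to rotation and reflection; for the weighted document distances above it gives the best d-dimensional Gram-matrix approximation.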

30 | Distance metric learning with kernels
- Tsang, Kwok, et al.
Citation Context: … exists that fulfills all constraints, the solution is typically not unique. We aim to select a matrix $AWA^{T}$ such that $d_{A,W}$ remains as close to an unweighted Euclidean metric as possible. Following [8], we minimize the norm of the eigenvalues $\lambda_i$ of $AWA^{T}$. Since $\lVert\lambda\rVert_{2}^{2} = \lVert AWA^{T}\rVert_{F}^{2}$, this leads to the following optimization problem: $\min_{W}\ \tfrac{1}{2}\lVert AWA^{T}\rVert_{F}^{2}$ subject to $d_{A,W}^{2}(\mathbf{x}_i,\mathbf{x}_k) - d_{A,W}^{2}(\mathbf{x}_i,\mathbf{x}_j) \ge 1$, $W_{ii} \ge 0$ …
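The step in this context from minimizing the eigenvalue norm to minimizing the Frobenius norm rests on the identity that, for a symmetric matrix, the sum of squared eigenvalues equals the squared Frobenius norm. A quick numerical check on a random instance (illustrative only, not from the paper):

```python
import numpy as np

# For symmetric M = A W A^T, sum(lambda_i^2) == ||M||_F^2, so minimizing
# the 2-norm of the eigenvalues is the same as minimizing ||A W A^T||_F.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
W = np.diag(rng.uniform(size=4))   # nonnegative diagonal weights
M = A @ W @ A.T                    # symmetric positive semidefinite
fro_sq = np.sum(M ** 2)            # ||M||_F^2
eig_sq = np.sum(np.linalg.eigvalsh(M) ** 2)
assert np.isclose(fro_sq, eig_sq)
```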