Results 11 - 15 of 15

### Approximation Schemes for . . .

, 2011

Abstract

In correlation clustering, given similarity or dissimilarity information for all pairs of data items, the goal is to find a clustering of the items into similarity classes, with the fewest inconsistencies with the input. This problem is hard to approximate in general but we give arbitrarily good approximation algorithms (PTASs) for two interesting special cases: when there are few clusters, and when the input is generated from a natural noisy model. In the feedback arc set problem in tournaments, given comparison information (a better than b) for all pairs of data items, the goal is to find a ranking of the items with the fewest inconsistencies with the input. We give the first PTAS for this problem. We then extend our techniques to a more general class of problems called fragile dense problems.
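To make "inconsistencies with the input" concrete, here is a minimal sketch of the objective that correlation clustering minimizes: a pair marked similar should share a cluster, a pair marked dissimilar should not, and each violated constraint costs one. The function names and data representation are illustrative, not from the paper.

```python
# Hedged sketch of the correlation-clustering disagreement count.
# labels: dict mapping item -> cluster id (a candidate clustering);
# similar_pairs: set of frozensets marked "similar"; every other pair
# is treated as "dissimilar".
from itertools import combinations

def disagreements(labels, similar_pairs):
    cost = 0
    for a, b in combinations(sorted(labels), 2):
        same_cluster = labels[a] == labels[b]
        marked_similar = frozenset((a, b)) in similar_pairs
        if same_cluster != marked_similar:
            cost += 1  # this pair is inconsistent with the input
    return cost
```

The PTASs described in the abstract approximate the minimum of this count over all clusterings; the brute-force minimum is only feasible for tiny instances.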

### Finding The Most Probable Ranking of Objects with Probabilistic Pairwise Preferences

- 10TH INTERNATIONAL CONFERENCE ON DOCUMENT ANALYSIS AND RECOGNITION
, 2009

Abstract

This paper discusses the ranking of a set of objects when a possibly inconsistent set of pairwise preferences is given. We consider the task of ranking objects when pairwise preferences not only can contradict each other, but in general are not binary, meaning that for each pair of objects the preference is represented by a pair of non-negative numbers that sum to one and can be viewed as a confidence in our belief that one object is preferable to the other in the absence of any other information. We propose a probability function on the sequence of objects that incorporates non-binary preferences, evaluate methods for finding the most probable ranking under this model, and use it to rank results of a Microsoft On-line Handwriting Recognizer.
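One natural probability function of the kind the abstract describes scores a ranking by the product of the pairwise confidences it agrees with. The sketch below is a hedged illustration under an independence assumption, not the paper's exact model; brute-force search over permutations stands in for the evaluated methods and is only feasible for small item sets.

```python
# Hedged sketch: pref[(i, j)] is the confidence that i is preferred to j,
# with pref[(i, j)] + pref[(j, i)] == 1 for every pair.
from itertools import permutations

def ranking_probability(order, pref):
    """Product of confidences for every ordered pair implied by the ranking."""
    prob = 1.0
    for idx, i in enumerate(order):
        for j in order[idx + 1:]:
            prob *= pref[(i, j)]
    return prob

def most_probable_ranking(items, pref):
    """Exhaustive search over all rankings (illustration only)."""
    return max(permutations(items), key=lambda order: ranking_probability(order, pref))
```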

### Reduction from Cost-sensitive Ordinal Ranking to Weighted Binary Classification

Abstract
We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows us not only to design good ordinal ranking algorithms based on well-tuned binary classification approaches, but also to derive new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance as well as improved listwise ranking performance.
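A common way to realize the three steps above is a threshold encoding: an ordinal label y in {1..K} yields one binary example per threshold k ("is y greater than k?"), and the ranker counts the thresholds the learned classifier says an input exceeds. The sketch below is a hedged illustration of that construction (unit weights, illustrative names), not the paper's exact cost-sensitive formulation.

```python
# Hedged sketch of a threshold-style reduction from ordinal ranking
# to binary classification.

def extend_examples(X, y, K):
    """Step 1: turn each ordinal example (x, label) with label in {1..K}
    into K-1 binary examples ((x, k), is_label_above_k)."""
    extended = []
    for x, label in zip(X, y):
        for k in range(1, K):
            extended.append(((x, k), 1 if label > k else 0))
    return extended

def rank_from_binary(x, K, binary_predict):
    """Step 3: predicted rank = 1 + number of thresholds the binary
    classifier (step 2, trained elsewhere) says x exceeds."""
    return 1 + sum(binary_predict(x, k) for k in range(1, K))
```

Any off-the-shelf binary learner can be trained on the output of `extend_examples` and plugged in as `binary_predict`, which is what makes the reduction framework modular.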

### Reviewers

, 2009

Abstract
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved for a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user generated