Results 1–10 of 866
Enhanced statistical rankings via targeted data collection
Cited by 2 (0 self)
Abstract: "... Given a graph where vertices represent alternatives and pairwise comparison data, y_ij, is given on the edges, the statistical ranking problem is to find a potential function, defined on the vertices, such that the gradient of the potential function agrees with pairwise comparisons. We study the depe ..."
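The problem stated in this abstract, finding vertex potentials whose edge differences agree with the comparison data y_ij, can be sketched as a least-squares fit. This is a minimal illustration under that least-squares reading; the paper's actual estimator and its targeted data-collection scheme are not reproduced here.

```python
import numpy as np

def rank_from_pairwise(n, comparisons):
    """Fit a potential s over n vertices so that s[j] - s[i] ~ y_ij
    in the least-squares sense. Hypothetical minimal formulation;
    the paper's estimator may differ."""
    rows, rhs = [], []
    for i, j, y_ij in comparisons:
        row = np.zeros(n)
        row[i], row[j] = -1.0, 1.0   # discrete gradient on edge (i, j)
        rows.append(row)
        rhs.append(y_ij)
    A = np.array(rows)
    s, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return s - s.mean()              # potentials are defined up to a constant

# Example: consistent comparisons saying alternative 2 > 1 > 0
scores = rank_from_pairwise(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)])
```

With consistent data as above, the recovered potentials reproduce the comparisons exactly; with noisy or cyclic data the least-squares fit returns the closest gradient-consistent ranking.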
Feature selection for high-dimensional data: a fast correlation-based filter solution
 In: Proceedings of the 20th International Conference on Machine Learning
, 2003
Cited by 276 (12 self)
Abstract: "... Feature selection, as a preprocessing step to machine learning, is effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase of dimensionality of data poses a severe challenge to many existing ... The efficiency and effectiveness of our method is demonstrated through extensive comparisons with other methods using real-world data of high dimensionality."
An extension on "statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons
 Journal of Machine Learning Research
Cited by 159 (37 self)
Abstract: "... In a recently published paper in JMLR, Demšar (2006) recommends a set of nonparametric statistical tests and procedures which can be safely used for comparing the performance of classifiers over multiple data sets. After studying the paper, we realize that the paper correctly introduces the basic ..."
Label Ranking by Learning Pairwise Preferences
Cited by 89 (20 self)
Abstract: "... Preference learning is an emerging topic that appears in different guises in the recent literature. This work focuses on a particular learning scenario called label ranking, where the problem is to learn a mapping from instances to rankings over a finite number of labels. Our approach for learning such a mapping, called ranking by pairwise comparison (RPC), first induces a binary preference relation from suitable training data using a natural extension of pairwise classification. A ranking is then derived from the preference relation thus obtained by means of a ranking procedure, whereby different ..."
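The second step described in this abstract, turning a learned pairwise preference relation into a label ranking, can be sketched with a simple weighted-voting score. The preference values and the aggregation rule below are illustrative assumptions; RPC admits several ranking procedures and this shows only one.

```python
from itertools import combinations

def rpc_rank(labels, pref):
    """pref[(a, b)] in [0, 1]: learned degree to which label a is
    preferred to label b. Score each label by summing its preference
    over all others, then sort. One simple aggregation; not the only
    procedure the RPC framework allows."""
    score = {a: 0.0 for a in labels}
    for a, b in combinations(labels, 2):
        p = pref.get((a, b), 1.0 - pref.get((b, a), 0.5))
        score[a] += p          # a's share of the pairwise vote
        score[b] += 1.0 - p    # b gets the complement
    return sorted(labels, key=score.get, reverse=True)

# Hypothetical preference degrees for three labels
ranking = rpc_rank(["x", "y", "z"],
                   {("x", "y"): 0.9, ("y", "z"): 0.8, ("x", "z"): 0.7})
```

Summing soft preference degrees rather than hard votes lets near-ties contribute proportionally instead of flipping the ranking on a 0.51 vs 0.49 decision.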
Learning Mallows Models with Pairwise Preferences
Cited by 76 (9 self)
Abstract: "... Learning preference distributions is a key problem in many areas (e.g., recommender systems, IR, social choice). However, many existing methods require restrictive data models for evidence about user preferences. We relax these restrictions by considering as data arbitrary pairwise comparisons—the f ..."
Analysis of Pair-Wise Comparisons
, 2005
Abstract: "... For biological arrays, performing multiple experimental repeats is a common way to improve certainty of gene identification. But this is offset by the high cost of each experiment. In a typical experiment, n independent measurements (e.g., data from n separate gene expression arrays (Affymetrix, 200 ... comparisons derivable from such data. We show the 2n − 1 independent comparisons available from among the n^2 possibilities perform the best."
Sampling for Pairwise and Multiple Comparisons
Abstract: "... While the distribution-free nature of permutation tests makes them the most appropriate method for hypothesis testing under a wide range of conditions, their computational demands can be runtime prohibitive, especially if samples are not very small and/or many tests must be conducted (e.g. all pairwise comparisons). This paper presents statistical code that performs continuous-data permutation tests under such conditions very quickly, often more than an order of magnitude faster than widely available commercial alternatives, when many tests must be performed and some of the sample pairs contain a ..."
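A basic two-sample permutation test of the kind this abstract accelerates can be sketched as follows. This is a plain Monte Carlo implementation for illustration only; it deliberately omits the optimizations the paper's code is about.

```python
import random

def perm_test(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test on the absolute difference of means.
    Illustrative implementation, not the paper's optimized code."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    nx, ny = len(x), len(y)
    observed = abs(sum(x) / nx - sum(y) / ny)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabelling
        px, py = pooled[:nx], pooled[nx:]
        if abs(sum(px) / nx - sum(py) / ny) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)             # add-one correction keeps p > 0

# Two clearly separated samples should give a small p-value
p = perm_test([5.1, 5.3, 4.9, 5.2], [6.0, 6.2, 5.9, 6.1], n_perm=2000)
```

The cost is one pass over the pooled data per permutation, which is exactly why running all pairwise comparisons this way becomes prohibitive and motivates the faster code the paper presents.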
Classification Of Gene Expression Data by Pairwise Comparisons
Abstract: "... In this report, we discuss a strategy to produce simple and easy-to-understand classifiers for polychotomous classification of gene expression data. In particular, we propose to decompose the K-class prediction problem into the K(K-1)/2 possible 2-class ones, solve those and combine them in an appropriate manner ..."
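The decomposition this abstract describes, K(K-1)/2 binary problems recombined into one K-class prediction, can be sketched with majority voting over the pairwise classifiers. Majority voting is one common combination scheme assumed here for illustration; the report may combine the binary decisions differently.

```python
from itertools import combinations

def one_vs_one_predict(x, classes, binary_models):
    """Combine the K(K-1)/2 pairwise classifiers by majority voting.
    binary_models[(a, b)](x) returns the winning class, a or b."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[binary_models[(a, b)](x)] += 1
    return max(classes, key=votes.get)

# Hypothetical toy binary models: each decides by a threshold
# on a single scalar "expression" feature.
models = {
    (0, 1): lambda x: 0 if x < 1.0 else 1,
    (0, 2): lambda x: 0 if x < 1.5 else 2,
    (1, 2): lambda x: 1 if x < 2.0 else 2,
}
pred = one_vs_one_predict(0.5, [0, 1, 2], models)
```

Each binary model only ever sees two classes, which is what keeps the individual classifiers simple and interpretable, at the price of training a quadratic number of them.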
Evaluation Of Gaussian Processes And Other Methods For Non-Linear Regression
, 1996
Cited by 165 (17 self)
Abstract: "... This thesis develops two Bayesian learning methods relying on Gaussian processes and a rigorous statistical approach for evaluating such methods. In these experimental designs the sources of uncertainty in the estimated generalisation performances due to both variation in training and test sets are accounted for. The framework allows for estimation of generalisation performance as well as statistical tests of significance for pairwise comparisons. Two experimental designs are recommended and supported by the DELVE software environment. Two new nonparametric Bayesian learning methods relying ..."
Ranking by Pairwise Comparison: A Note on Risk Minimization
 In Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 04)
, 2004
Cited by 14 (10 self)
Abstract: "... In this paper we consider the problem of learning ranking functions in a supervised manner. A ranking function is a mapping from instances to rankings over a finite number of labels and can thus be seen as an extension of a classification function. Our learning method, referred to as ranking by pairwise comparison (RPC), is a two-step procedure. First, a valued preference structure is induced from given preference data, using a natural extension of so-called pairwise classification. A ranking is then derived from that preference structure by means of a simple scoring function. It is shown that ..."