Results 1-7 of 7
Universal well-calibrated algorithm for online classification
Learning Theory and Kernel Machines: Sixteenth Annual Conference on Learning Theory and Seventh Kernel Workshop, volume 2777 of Lecture Notes in Artificial Intelligence, 2003
Abstract

Cited by 4 (2 self)
We study the problem of online classification in which the prediction algorithm, for each "significance level" δ, is required to output as its prediction a range of labels (intuitively, those labels deemed compatible with the available data at the level δ) rather than just one label; as usual, the examples are assumed to be generated independently from the same probability distribution P. The prediction algorithm is said to be "well-calibrated" for P and δ if the long-run relative frequency of errors does not exceed δ almost surely with respect to P. For well-calibrated algorithms we take the number of "uncertain" predictions (i.e., those containing more than one label) as the principal measure of predictive performance. The main result of this paper is the construction of a prediction algorithm which, for any (unknown) P and any δ: (a) makes errors independently and with probability δ at every trial (in particular, is well-calibrated for P and δ); (b) makes in the long run no more uncertain predictions than any other prediction algorithm that is well-calibrated for P and δ; (c) processes example n in time O(log n).
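The abstract's idea of outputting a range of labels at significance level δ can be illustrated with a toy conformal prediction set. This is a minimal sketch, not the paper's algorithm: the nonconformity score (distance to the candidate class's mean) and the two-class setup are simplifying assumptions chosen for brevity.

```python
import numpy as np

def conformal_label_set(train_X, train_y, x_new, delta, labels=(0, 1)):
    """Toy conformal prediction set: for each candidate label, compute a
    p-value from nonconformity scores (here, distance to the mean of the
    candidate's class) and keep every label whose p-value exceeds delta."""
    prediction_set = []
    for label in labels:
        # tentatively extend the data with (x_new, label)
        X = np.vstack([train_X, x_new])
        y = np.append(train_y, label)
        # nonconformity score: distance to the mean of one's own class
        scores = np.array([
            np.linalg.norm(X[i] - X[y == y[i]].mean(axis=0))
            for i in range(len(X))
        ])
        # p-value: fraction of examples at least as nonconforming as x_new
        p = np.mean(scores >= scores[-1])
        if p > delta:
            prediction_set.append(label)
    return prediction_set
```

Lowering δ makes the predictor more cautious, so the label set grows; this is exactly the trade-off between error rate δ and the number of "uncertain" (multi-label) predictions that the abstract measures.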
Testing exchangeability online
 Proceedings of the Twentieth International Conference on Machine Learning
, 2003
Abstract

Cited by 3 (1 self)
The practical conclusions of the theory of probability can be substantiated as consequences of hypotheses about the limiting, under the given constraints, complexity of the phenomena studied.
Hedging Predictions in Machine Learning: The Second Computer Journal Lecture, doi:10.1093/comjnl/bxl065
, 2007
Abstract

Cited by 1 (1 self)
Recent advances in machine learning make it possible to design efficient prediction algorithms for data sets with huge numbers of parameters. This article describes a new technique for 'hedging' the predictions output by many such algorithms, including support vector machines, kernel ridge regression, kernel nearest neighbours and many other state-of-the-art methods. The hedged predictions for the labels of new objects include quantitative measures of their own accuracy and reliability. These measures are provably valid under the assumption of randomness, traditional in machine learning: the objects and their labels are assumed to be generated independently from the same probability distribution. In particular, it becomes possible to control (up to statistical fluctuations) the number of erroneous predictions by selecting a suitable confidence level. Validity being achieved automatically, the remaining goal of hedged prediction is efficiency: taking full account of the new objects' features and other available information to produce as accurate predictions as possible. This can be done successfully using the powerful machinery of modern machine learning.
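The "quantitative measures of accuracy and reliability" attached to a hedged prediction are conventionally derived from per-label conformal p-values. A minimal sketch of that post-processing step, assuming the p-values have already been computed by some conformal predictor:

```python
def hedge(p_values):
    """Turn per-label conformal p-values into a hedged prediction:
    the best label, plus confidence (1 minus the second-largest p-value,
    i.e. how strongly the alternatives are rejected) and credibility
    (the largest p-value, i.e. how plausible the best label itself is)."""
    ranked = sorted(p_values.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best_p = ranked[0]
    second_p = ranked[1][1] if len(ranked) > 1 else 0.0
    return best_label, 1.0 - second_p, best_p
```

A high-confidence, low-credibility prediction signals that no label fits the data well, which is itself useful information about the example.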
Sparse Conformal Predictors
, 902
Abstract

Cited by 1 (0 self)
Conformal predictors, introduced by Vovk et al. [16], serve to build prediction intervals by exploiting a notion of conformity of the new data point with previously observed data. In the present paper, we propose a novel method for constructing prediction intervals for the response variable in multivariate linear models. The main emphasis is on sparse linear models, where only a few of the covariates have significant influence on the response variable even if their number is very large. Our approach is based on combining the principle of conformal prediction with the ℓ1 penalized least squares estimator (LASSO). The resulting confidence set depends on a parameter ε > 0 and has a coverage probability larger than or equal to 1 − ε. The numerical experiments reported in the paper show that the length of the confidence set is small. Furthermore, as a byproduct of the proposed approach, we provide a data-driven procedure for choosing the LASSO penalty. The selection power of the method is illustrated on simulated data.
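The coverage guarantee of at least 1 − ε can be illustrated with a split-conformal regression interval. This is a sketch, not the paper's method: ordinary least squares stands in for the LASSO, and the fit/calibration split is a simplifying assumption.

```python
import numpy as np

def split_conformal_interval(X, y, x_new, eps, rng_seed=0):
    """Split-conformal prediction interval (OLS stands in for the LASSO):
    fit on one half of the data, take a conformal quantile of absolute
    residuals on the other half, and return prediction +/- that quantile."""
    rng = np.random.default_rng(rng_seed)
    idx = rng.permutation(len(y))
    fit, cal = idx[: len(y) // 2], idx[len(y) // 2:]
    # regression fit on the first half (the LASSO would be used here)
    beta, *_ = np.linalg.lstsq(X[fit], y[fit], rcond=None)
    residuals = np.abs(y[cal] - X[cal] @ beta)
    # conformal quantile: the ceil((n_cal + 1)(1 - eps))-th smallest residual
    k = int(np.ceil((len(cal) + 1) * (1 - eps)))
    q = np.sort(residuals)[min(k, len(cal)) - 1]
    pred = x_new @ beta
    return pred - q, pred + q
```

The interval's length is twice the calibration quantile q, so a sharper underlying estimator (such as a well-tuned LASSO on sparse data) yields smaller residuals and hence the short confidence sets the abstract reports.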
On the Flexibility of Theoretical Models for Pattern Recognition
, 2005
Abstract
This thesis is devoted to relaxing certain theoretical assumptions in pattern recognition models. In pattern recognition a predictor is trying to guess a discrete label of some object (usually a real vector), based on given examples of object-label pairs. Pattern recognition was
Multiple Kernel Learning for Efficient Conformal Predictions
Abstract
The Conformal Predictions framework is a recent development in machine learning to associate reliable measures of confidence with results in classification and regression. This framework is founded on the principles of algorithmic randomness (Kolmogorov complexity), transductive inference and hypothesis testing. While the formulation of the framework guarantees validity, the efficiency of the framework depends greatly on the choice of the classifier and appropriate kernel functions or parameters. While this framework has extensive potential to be useful in several applications, the lack of efficiency can limit its usability. In this paper, we propose a novel Multiple Kernel Learning (MKL) methodology to maximize efficiency in the CP framework. This method is validated using the k-Nearest Neighbors classifier on a cardiac patient dataset, and our results show promise in using MKL to obtain efficient conformal predictors that can be practically useful.