Results 1–10 of 12,654
Prediction Intervals for Class Probabilities, 2007
"... Prediction intervals for class probabilities are of interest in machine learning because they can quantify the uncertainty about the class probability estimate for a test instance. The idea is that all likely class probability values of the test instance are included, with a prespecified confidence ..."
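As a rough illustration of the idea in this snippet, one simple way to obtain such an interval is to take a percentile interval over an ensemble's per-member probability estimates. The function below is a hypothetical sketch with names of our own choosing, not the paper's method.

```python
# Hypothetical sketch (not the paper's method): a percentile-based
# prediction interval for a class probability, taken over the
# per-member estimates of an ensemble such as a bagged model.

def probability_interval(estimates, confidence=0.95):
    """Return (low, high) covering the central `confidence` mass
    of the ensemble's class probability estimates."""
    s = sorted(estimates)
    lo = round((1 - confidence) / 2 * (len(s) - 1))
    hi = round((1 + confidence) / 2 * (len(s) - 1))
    return s[lo], s[hi]
```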
Visualizing Class Probability Estimators
In Lecture Notes in Artificial Intelligence 2838, 2003
"... Inducing classifiers that make accurate predictions on future data is a driving force for research in inductive learning. However, also of importance to the users is how to gain information from the models produced. Unfortunately, some of the most powerful inductive learning algorithms generate ..."
Cited by 4 (0 self)
... of its class probability estimates. It requires the classifier to generate class probabilities, but most practical algorithms are able to do so (or can be modified to this end). ...
Active Sampling for Class Probability Estimation and Ranking
Machine Learning, 2004
"... In many cost-sensitive environments class probability estimates are used by decision makers to evaluate the expected utility from a set of alternatives. Supervised learning can be used to build class probability estimates; however, it often is very costly to obtain training data with class labels ..."
Cited by 78 (9 self)
Boosted classification trees and class probability/quantile estimation
Journal of Machine Learning Research, 2006
"... The standard by which binary classifiers are usually judged, misclassification error, assumes equal costs of misclassifying the two classes or, equivalently, classifying at the 1/2 quantile of the conditional class probability function P[y = 1 | x]. Boosted classification trees are known to perform qu ..."
Cited by 38 (4 self)
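To unpack the "1/2 quantile" remark above: under unequal misclassification costs, the optimal decision threshold on P[y = 1 | x] shifts away from 1/2. A minimal sketch, with cost parameter names of our own choosing:

```python
# Minimal sketch of cost-sensitive thresholding on P[y = 1 | x]:
# with false-positive cost c_fp and false-negative cost c_fn, the
# risk-minimising rule predicts class 1 when the probability exceeds
# q = c_fp / (c_fp + c_fn); equal costs recover the 1/2 quantile.

def classify(prob_y1, c_fp=1.0, c_fn=1.0):
    q = c_fp / (c_fp + c_fn)
    return 1 if prob_y1 > q else 0
```

With a false negative nine times as costly as a false positive, the threshold drops to 0.1, so even low-probability positives are predicted as class 1.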
The use of the area under the ROC curve in the evaluation of machine learning algorithms
Pattern Recognition, 1997
"... In this paper we investigate the use of the area under the receiver operating characteristic (ROC) curve (AUC) as a performance measure for machine learning algorithms. As a case study we evaluate six machine learning algorithms (C4.5, Multiscale Classifier, Perceptron, Multilayer Perceptron, kNe ..."
Cited by 685 (3 self)
... sensitivity in Analysis of Variance (ANOVA) tests; a standard error that decreased as both AUC and the number of test samples increased; independence of the decision threshold; and invariance to a priori class probabilities. The paper concludes with the recommendation that AUC be used in preference to overall ...
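The AUC this abstract recommends can be computed directly from scores as a rank statistic (the Mann-Whitney formulation of the area under the ROC curve). The sketch below is illustrative and not taken from the paper; it uses the O(n²) pairwise form for clarity.

```python
# Illustrative sketch: AUC as the Mann-Whitney rank statistic —
# the probability that a randomly chosen positive outscores a
# randomly chosen negative, counting ties as 1/2.

def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

Note that the result depends only on the ordering of the scores, which is exactly why AUC is independent of any decision threshold.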
Solving multiclass learning problems via error-correcting output codes
Journal of Artificial Intelligence Research, 1995
"... Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form ⟨x_i, f(x_i)⟩. Existing approaches to multiclass l ..."
Cited by 726 (8 self)
... that, like the other methods, the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass ...
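The decoding step of error-correcting output codes can be sketched in a few lines: each class is assigned a binary codeword, the k binary classifiers each predict one bit, and the class whose codeword is nearest in Hamming distance wins. The codewords below are a toy example of our own, not from the paper.

```python
# Hedged sketch of ECOC decoding with a toy 3-class, 5-bit code.
# Each bit would be predicted by a separate binary classifier; the
# class with the nearest codeword (Hamming distance) is returned,
# so single-bit classifier errors can be corrected.

CODEWORDS = {
    "class0": (0, 0, 0, 1, 1),
    "class1": (0, 1, 1, 0, 0),
    "class2": (1, 0, 1, 0, 1),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    """Return the class whose codeword is closest to the predicted bits."""
    return min(CODEWORDS, key=lambda c: hamming(CODEWORDS[c], bits))
```

Because every pair of codewords above differs in at least three bits, any single mispredicted bit still decodes to the intended class.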
Active Learning for Class Probability Estimation and Ranking
In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-2001), 2001
"... For many supervised learning tasks it is very costly to produce training data with class labels. Active learning acquires data incrementally, at each stage using the model learned so far to help identify especially useful additional data for labeling. Existing empirical active learning approac ..."
Cited by 44 (5 self)
... approaches have focused on learning classifiers. However, many applications require estimates of the probability of class membership, or scores that can be used to rank new cases. We present a new active learning method for class probability estimation (CPE) and ranking. BOOTSTRAP-LV selects new data ...
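BOOTSTRAP-LV's selection criterion is not described in this snippet; for orientation only, a generic uncertainty-sampling selector (a simpler relative of such active learning methods) can be written as:

```python
# Generic uncertainty sampling, shown only for orientation — this is
# NOT BOOTSTRAP-LV, whose criterion is not given in the snippet above.
# Pick the unlabeled example whose estimated P(class = 1) is closest
# to 0.5, i.e. where the current model is least certain.

def pick_next(unlabeled, prob_model):
    return min(unlabeled, key=lambda x: abs(prob_model(x) - 0.5))
```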
On Class-Probability Estimates and Cost-Sensitive Evaluation of Classifiers
In Workshop on Cost-Sensitive Learning at the Seventeenth International Conference on Machine Learning (WCSL at ICML-2000), 2000
"... This paper addresses two cost-sensitive learning methodology issues. First, we ask whether Bagging is always an appropriate procedure for computing accurate class-probability estimates for cost-sensitive classification. Second, we point the reader to a potential source of erroneous ..."
Cited by 2 (0 self)
Divergence measures based on the Shannon entropy
IEEE Transactions on Information Theory, 1991
"... A new class of information-theoretic divergence measures based on the Shannon entropy is introduced. Unlike the well-known Kullback divergences, the new measures do not require the condition of absolute continuity to be satisfied by the probability distributions involved. More importantly, ..."
Cited by 666 (0 self)
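The best-known measure in the class this abstract introduces is the Jensen-Shannon divergence, which can be written as the entropy of the mixture minus the mean of the entropies: JSD(P, Q) = H((P+Q)/2) − (H(P) + H(Q))/2. A minimal sketch, with variable names of our own:

```python
import math

# Sketch of the Jensen-Shannon divergence (base-2 logarithms, so the
# value lies in [0, 1]). Unlike the Kullback divergences it remains
# defined even when p and q do not share support, since zero-mass
# terms are simply skipped in the entropy sum.

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def jsd(p, q):
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return entropy(m) - (entropy(p) + entropy(q)) / 2
```

Two distributions with disjoint support, such as [1, 0] and [0, 1], give the maximal value 1 — exactly the case where a Kullback divergence would be undefined.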
Neural Network Classification and Prior Class Probabilities, 1998
"... A commonly encountered problem in MLP (multilayer perceptron) classification problems is related to the prior probabilities of the individual classes: if the number of training examples that correspond to each class varies significantly between the classes, then it may be harder for the network to ..."
Cited by 9 (0 self)
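A standard adjustment related to this problem (offered here as background, not necessarily the paper's method) is prior correction: when training-set class priors differ from the priors expected at deployment, a probabilistic classifier's posteriors can be rescaled by the prior ratio and renormalised.

```python
# Hedged sketch of prior correction for a probabilistic classifier:
# rescale each posterior by (true prior / training prior), then
# renormalise so the adjusted posteriors sum to 1. This is a
# textbook adjustment, not necessarily the paper's own technique.

def adjust_posteriors(posteriors, train_priors, true_priors):
    scaled = [p * t / tr
              for p, tr, t in zip(posteriors, train_priors, true_priors)]
    z = sum(scaled)
    return [s / z for s in scaled]
```

For example, a posterior of (0.5, 0.5) from a balanced training set becomes (0.9, 0.1) when the deployment priors are known to be 9:1.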