Results 1 - 10 of 1,521,596
MetaCost: A General Method for Making Classifiers Cost-Sensitive
In Proceedings of the Fifth International Conference on Knowledge Discovery and Data Mining, 1999
"... Research in machine learning, statistics and related fields has produced a wide variety of algorithms for classification. However, most of these algorithms assume that all errors have the same cost, which is seldom the case in KDD problems. Individually making each classification learner cost-sensitive is laborious, and often nontrivial. In this paper we propose a principled method for making an arbitrary classifier cost-sensitive by wrapping a cost-minimizing procedure around it. This procedure, called MetaCost, treats the underlying classifier as a black box, requiring no knowledge of its ..."
Cited by 411 (4 self)

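The wrapper's core step can be sketched in a few lines: estimate class probabilities for each training example (MetaCost does this with bagged resamples of the base learner), relabel the example with the class of minimum expected cost, and retrain. A minimal sketch of just the relabeling rule, with an illustrative cost matrix; the probability-estimation and retraining steps are omitted:

```python
def relabel(prob, cost):
    """Return the class whose expected misclassification cost is lowest.

    prob[j]    -- estimated P(j | x), e.g. from bagged models
    cost[i][j] -- cost of predicting class i when the true class is j
    """
    risks = [sum(p * c for p, c in zip(prob, row)) for row in cost]
    return min(range(len(risks)), key=risks.__getitem__)

# Illustrative costs: a false negative (predict 0, true 1) costs 5,
# a false positive costs 1, correct decisions cost nothing.
cost = [[0, 5],
        [1, 0]]

# Even at 70% confidence in class 0, the costs favor relabeling as class 1:
# expected risk is 0.3 * 5 = 1.5 for predicting 0, vs 0.7 * 1 = 0.7 for 1.
print(relabel([0.7, 0.3], cost))  # -> 1
```
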
The Foundations of Cost-Sensitive Learning
In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, 2001
"... This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-cost-sensitive learning method. However, we then argue ..."
Cited by 398 (6 self)

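For the two-class case with zero cost for correct decisions, this line of analysis yields a well-known closed-form decision rule: predict positive whenever P(positive | x) exceeds c_fp / (c_fp + c_fn), where c_fp and c_fn denote the false-positive and false-negative costs. A minimal sketch:

```python
def optimal_threshold(c_fp, c_fn):
    """Threshold on P(positive | x) above which predicting positive has
    lower expected cost than predicting negative (correct decisions are
    assumed cost-free)."""
    return c_fp / (c_fp + c_fn)

# A missed positive costing 9x a false alarm lowers the threshold from 0.5,
# so much weaker evidence suffices to predict positive:
print(optimal_threshold(1.0, 9.0))  # -> 0.1
```
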
Thresholding for Making Classifiers Cost-Sensitive
"... In this paper we propose a very simple, yet general and effective method to make any cost-insensitive classifiers (that can produce probability estimates) cost-sensitive. The method, called Thresholding, selects a proper threshold from training instances according to the misclassification cost. Simi ..."

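The selection step the abstract describes can be sketched as a search over candidate thresholds, scoring each by the total misclassification cost it incurs on training instances. Function and variable names here are illustrative, not the paper's:

```python
def best_threshold(scores, labels, c_fp, c_fn):
    """Pick the score threshold that minimises total misclassification cost
    on (score, label) pairs -- an empirical alternative to the theoretical
    threshold c_fp / (c_fp + c_fn).

    scores -- probability estimates for the positive class, in [0, 1]
    labels -- true classes, 0 or 1
    """
    # Each distinct score is a candidate cut; 1.01 means "predict all negative".
    candidates = sorted(set(scores)) + [1.01]

    def total_cost(t):
        cost = 0.0
        for s, y in zip(scores, labels):
            pred = 1 if s >= t else 0
            if pred == 1 and y == 0:
                cost += c_fp  # false positive
            elif pred == 0 and y == 1:
                cost += c_fn  # false negative
        return cost

    return min(candidates, key=total_cost)

# With expensive false negatives, the chosen cut tolerates one false positive
# (at score 0.6) rather than miss the positive at score 0.4:
print(best_threshold([0.2, 0.4, 0.6, 0.8], [0, 1, 0, 1], c_fp=1.0, c_fn=5.0))  # -> 0.4
```
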
Cost-Sensitive Tree of Classifiers
"... Recently, machine learning algorithms have successfully entered large-scale real-world industrial applications (e.g. search engines and email spam filters). Here, the CPU cost during test-time must be budgeted and accounted for. In this paper, we address the challenge of balancing the test-time cost and the classifier accuracy in a principled fashion. The test-time cost of a classifier is often dominated by the computation required for feature extraction, which can vary drastically across features. We decrease this extraction time by constructing a tree of classifiers, through which test inputs traverse along ..."
Cited by 13 (6 self)

An Empirical Study of MetaCost using Boosting Algorithms
In Proceedings of the Eleventh European Conference on Machine Learning
"... MetaCost is a recently proposed procedure that converts an error-based learning algorithm into a cost-sensitive algorithm. This paper investigates two important issues centered on the procedure which were ignored in the paper proposing MetaCost. First, no comparison was made between MetaCost's ..."
Cited by 10 (1 self)

Bayesian Network Classifiers
, 1997
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restr ..."
Cited by 788 (23 self)

Estimating Continuous Distributions in Bayesian Classifiers
In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, 1995
"... When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional ..."
Cited by 489 (2 self)

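The two estimators being compared can be sketched directly: fit one Gaussian per class-conditional (the normality assumption), or average a small kernel placed on each training value. The bandwidth h below is an arbitrary illustrative choice, not the paper's, which derives it from the data:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a Gaussian with mean mu and standard deviation sigma at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def single_gaussian(x, sample):
    """Parametric estimate: fit one Gaussian to the sample (the normality
    assumption)."""
    mu = sum(sample) / len(sample)
    var = sum((v - mu) ** 2 for v in sample) / len(sample)
    return gaussian_pdf(x, mu, math.sqrt(var))

def kernel_density(x, sample, h=0.5):
    """Nonparametric estimate: average a Gaussian kernel of bandwidth h
    centered on each training value (h = 0.5 is illustrative only)."""
    return sum(gaussian_pdf(x, v, h) for v in sample) / len(sample)

# A bimodal feature: the single Gaussian smears both modes together and
# reports high density in the empty gap between them; the kernel estimate
# keeps the density near the modes where the data actually lie.
sample = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
print(single_gaussian(3.0, sample) > kernel_density(3.0, sample))  # -> True
```
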
A training algorithm for optimal margin classifiers
In Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, 1992
"... A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC ..."
Cited by 1848 (44 self)

Cost-Sensitive Learning by Cost-Proportionate Example Weighting
, 2003
"... We propose and evaluate a family of methods for converting classifier learning algorithms and classification theory into cost-sensitive algorithms and theory. The proposed conversion is based on cost-proportionate weighting of the training examples, which can be realized either by feeding the weight ..."
Cited by 155 (14 self)

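One simple way to realize cost-proportionate weighting without a weight-aware learner is rejection sampling: draw a new training set in which each example survives with probability proportional to its weight, then train any standard classifier on the result. A sketch, assuming the weights are per-example misclassification costs:

```python
import random

def rejection_sample(examples, weights, seed=0):
    """Draw a cost-proportionate subsample: keep each example independently
    with probability weight / max_weight. The seed is fixed here only so the
    illustration is reproducible."""
    rng = random.Random(seed)
    zmax = max(weights)
    return [x for x, w in zip(examples, weights) if rng.random() < w / zmax]

# An example with zero cost is never kept; examples at the maximum cost
# are always kept; intermediate costs are kept proportionally often.
print(rejection_sample(["a", "b", "c"], [2.0, 2.0, 0.0]))  # -> ['a', 'b']
```
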
Hierarchically Classifying Documents Using Very Few Words
, 1997
"... The proliferation of topic hierarchies for text documents has resulted in a need for tools that automatically classify new documents within such hierarchies. Existing classification schemes which ignore the hierarchical structure and treat the topics as separate classes are often inadequate in text ..."
Cited by 521 (8 self)