Results 1–10 of 748,287
Thresholding for Making Classifiers Cost-Sensitive
Abstract: In this paper we propose a very simple, yet general and effective method to make any cost-insensitive classifier (that can produce probability estimates) cost-sensitive. The method, called Thresholding, selects a proper threshold from training instances according to the misclassification cost. ...
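To illustrate the general idea, the closed-form cost-based threshold below is the standard theoretical optimum for a two-class problem with zero cost for correct predictions; note the paper itself selects its threshold empirically from training instances, so this is a sketch of the concept rather than the paper's procedure (function names are mine):

```python
def cost_threshold(cost_fp, cost_fn):
    """Theoretically optimal threshold on P(positive | x): predict positive
    when p >= cost_fp / (cost_fp + cost_fn), assuming correct calls cost 0."""
    return cost_fp / (cost_fp + cost_fn)

def classify(probs, cost_fp, cost_fn):
    """Turn probability estimates into cost-sensitive 0/1 decisions."""
    t = cost_threshold(cost_fp, cost_fn)
    return [1 if p >= t else 0 for p in probs]

# False negatives 4x as costly as false positives -> threshold drops to 0.2,
# so borderline cases are pushed toward the positive class.
preds = classify([0.1, 0.3, 0.7], cost_fp=1.0, cost_fn=4.0)  # -> [0, 1, 1]
```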
The Foundations of Cost-Sensitive Learning
In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, 2001
Cited by 398 (6 self)
Abstract: This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-cost-sensitive learning method. However, we then argue ...
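The rebalancing prescription described in the abstract can be sketched as follows (notation is mine, not the paper's): to make a learner whose natural decision threshold is p0 (typically 0.5) behave as if the threshold were the cost-optimal p* = c_fp / (c_fp + c_fn), multiply the number of negative training examples by p*/(1 - p*) × (1 - p0)/p0.

```python
def negative_multiplier(c_fp, c_fn, p0=0.5):
    """Factor by which to multiply the number of negative training
    examples so a threshold-p0 learner makes threshold-p* decisions."""
    p_star = c_fp / (c_fp + c_fn)
    return (p_star / (1 - p_star)) * ((1 - p0) / p0)

# False negatives 4x as costly: p* = 0.2, so keep only 1/4 of the negatives.
m = negative_multiplier(c_fp=1.0, c_fn=4.0)  # -> 0.25
```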
MetaCost: A General Method for Making Classifiers Cost-Sensitive
In Proceedings of the Fifth International Conference on Knowledge Discovery and Data Mining, 1999
Cited by 411 (4 self)
Abstract: Research in machine learning, statistics and related fields has produced a wide variety of algorithms for classification. However, most of these algorithms assume that all errors have the same cost, which is seldom the case in KDD problems. Individually making each classification learner cost-sensitive is laborious, and often non-trivial. In this paper we propose a principled method for making an arbitrary classifier cost-sensitive by wrapping a cost-minimizing procedure around it. This procedure, called MetaCost, treats the underlying classifier as a black box, requiring no knowledge of its ...
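The wrapper's core step can be sketched (an illustration, not the authors' code): MetaCost estimates class probabilities for each training example by bagging the black-box learner, relabels each example with the class of minimum expected cost, then retrains the learner once on the relabeled data. The relabeling step, with hypothetical names:

```python
def min_cost_label(probs, cost):
    """probs[j] = estimated P(class j | x);
    cost[i][j] = cost of predicting class i when the true class is j.
    Returns the class of minimum expected cost."""
    k = len(probs)
    expected = [sum(cost[i][j] * probs[j] for j in range(k)) for i in range(k)]
    return min(range(k), key=lambda i: expected[i])

# Predicting 0 when the truth is 1 (a false negative) costs 10;
# the reverse mistake costs 1; correct predictions cost 0.
cost = [[0, 10], [1, 0]]
# Even a modest P(class 1 | x) = 0.2 flips the label to 1: 10*0.2 > 1*0.8.
label = min_cost_label([0.8, 0.2], cost)  # -> 1
```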
Cost-Sensitive Tree of Classifiers
Cited by 13 (6 self)
Abstract: Recently, machine learning algorithms have successfully entered large-scale real-world industrial applications (e.g. search engines and email spam filters). Here, the CPU cost during test time must be budgeted and accounted for. In this paper, we address the challenge of balancing the test-time cost and the classifier accuracy in a principled fashion. The test-time cost of a classifier is often dominated by the computation required for feature extraction, which can vary drastically across features. We decrease this extraction time by constructing a tree of classifiers, through which test inputs traverse along ...
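The cost-accounting idea can be illustrated with a hypothetical sketch (not the paper's algorithm): in a tree of classifiers an input pays a feature's extraction cost at most once along its root-to-leaf path, so the expected test-time cost weights each path's distinct features by the fraction of inputs routed down that path.

```python
def expected_cost(paths, feature_cost):
    """paths: list of (fraction_of_inputs, [features used on that path]).
    Each feature is charged once per path, however often it is reused."""
    total = 0.0
    for frac, feats in paths:
        total += frac * sum(feature_cost[f] for f in set(feats))
    return total

# Two root-to-leaf paths: a cheap feature "a" is extracted for everyone,
# the expensive feature "b" only for the 30% of inputs routed right.
cost = expected_cost([(0.7, ["a"]), (0.3, ["a", "b"])],
                     {"a": 1.0, "b": 10.0})  # 0.7*1 + 0.3*11 = 4.0
```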
Bayesian Network Classifiers
1997
Cited by 788 (23 self)
Abstract: Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive ...
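As background for the comparison, the naive Bayes baseline the abstract refers to can be sketched minimally (an illustration with Laplace smoothing over assumed two-value feature domains; not the paper's Bayesian-network models, and all names are mine):

```python
import math
from collections import Counter

def train_nb(X, y):
    """Count per-class, per-feature value frequencies."""
    classes = Counter(y)
    counts = {c: [Counter() for _ in X[0]] for c in classes}
    for xs, c in zip(X, y):
        for i, v in enumerate(xs):
            counts[c][i][v] += 1
    return classes, counts, len(y)

def predict_nb(model, xs):
    """Pick the class maximizing log P(c) + sum_i log P(x_i | c),
    with add-one smoothing (denominator assumes binary feature domains)."""
    classes, counts, n = model
    best, best_lp = None, -math.inf
    for c, nc in classes.items():
        lp = math.log(nc / n)
        for i, v in enumerate(xs):
            lp += math.log((counts[c][i][v] + 1) / (nc + 2))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cold")]
y = [0, 0, 1, 1]
model = train_nb(X, y)
pred = predict_nb(model, ("sunny", "hot"))  # -> 0
```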
On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes
2001
Cited by 513 (8 self)
Abstract: We compare discriminative and generative learning as typified by logistic regression and naive Bayes. We show, contrary to a widely held belief that discriminative classifiers are almost always to be preferred, that there can often be two distinct regimes of performance as the training set size is increased ...
Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers
Journal of Machine Learning Research, 2000
Cited by 560 (20 self)
Abstract: We present a unifying framework for studying the solution of multiclass categorization problems by reducing them to multiple binary problems that are then solved using a margin-based binary learning algorithm. The proposed framework unifies some of the most popular approaches in which each class is compared against all others, or in which all pairs of classes are compared to each other, or in which output codes with error-correcting properties are used. We propose a general method for combining the classifiers generated on the binary problems, and we prove a general empirical multiclass loss bound ...
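The one-against-all scheme mentioned in the abstract can be sketched as follows (illustrative only; a toy 1-D centroid scorer stands in for a real margin-based binary learner, and all names are hypothetical):

```python
def one_vs_rest_train(X, y, train_binary):
    """Train one binary scorer per class, with that class as +1, rest -1."""
    classes = sorted(set(y))
    return {c: train_binary(X, [1 if yi == c else -1 for yi in y])
            for c in classes}

def one_vs_rest_predict(scorers, x):
    """Predict the class whose binary scorer gives the largest score."""
    return max(scorers, key=lambda c: scorers[c](x))

def centroid_scorer(X, yb):
    """Toy stand-in for a margin classifier on 1-D inputs:
    score by closeness to the positive-class centroid."""
    pos = [x for x, t in zip(X, yb) if t == 1]
    m = sum(pos) / len(pos)
    return lambda x: -abs(x - m)

X = [1, 2, 10, 11, 20, 21]
y = ["a", "a", "b", "b", "c", "c"]
scorers = one_vs_rest_train(X, y, centroid_scorer)
```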
An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint
2008
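No abstract is available for this entry; a generic iterative soft-thresholding scheme of the kind the title describes (a sketch under my own notation, not necessarily the paper's exact iteration) alternates a gradient step on the quadratic data-fit term with a soft-thresholding step that enforces sparsity:

```python
def soft(v, t):
    """Soft-thresholding: shrink v toward zero by t."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def ista(A, y, lam, step, iters=50):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by repeating a gradient
    step on the quadratic term followed by componentwise soft-thresholding
    (step should be at most 1 / ||A^T A||)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# With A = identity the minimizer is simply soft-thresholding of y at lam:
x = ista([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.1], lam=0.5, step=1.0)  # [0.5, 0.0]
```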
Thresholding of Statistical Maps in Functional Neuroimaging Using the False Discovery Rate
Neuroimage, 2002
Cited by 494 (8 self)
Abstract: Finding objective and effective thresholds for voxel-wise statistics derived from neuroimaging data has been a long-standing problem. With at least one test performed for every voxel in an image, some correction of the thresholds is needed to control the error rates, but standard procedures for multiple ...
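The FDR-controlling threshold this line of work applies to voxel-wise maps is the Benjamini-Hochberg step-up rule (sketched here in my own notation, not the paper's code): sort the m p-values and find the largest k with p_(k) <= (k/m) * q, then declare significant every p-value at or below p_(k).

```python
def bh_threshold(pvals, q):
    """Benjamini-Hochberg step-up rule: return the p-value cutoff that
    controls the false discovery rate at level q (0.0 if nothing passes)."""
    m = len(pvals)
    thresh = 0.0
    for k, p in enumerate(sorted(pvals), start=1):
        if p <= q * k / m:
            thresh = p  # keep the largest k whose p-value clears its line
    return thresh

pv = [0.001, 0.008, 0.039, 0.041, 0.6, 0.9]
t = bh_threshold(pv, q=0.05)  # -> 0.008: only the two smallest pass
```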
Making Large-Scale Support Vector Machine Learning Practical
1998
Cited by 620 (1 self)
Abstract: Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large learning ... The paper presents algorithmic and computational results developed for SVMlight V2.0, which make large-scale SVM training more practical. The results give guidelines for the application of SVMs to large domains.
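The constraint structure named in the abstract explains why SVM solvers update dual coefficients in pairs; the sketch below illustrates that structure only and is not SVMlight's actual decomposition algorithm: with box constraints 0 <= a_k <= C and the single equality constraint sum_k a_k*y_k = 0, changing a_i by d forces a_j to change by -y_i*y_j*d, and d must be clipped so both coefficients stay inside the box.

```python
def pair_update(a, i, j, d, y, C):
    """Move a[i] by d and compensate with a[j], with d clipped so both
    stay in [0, C] and sum_k a[k]*y[k] is unchanged (labels y in {-1, +1})."""
    s = y[i] * y[j]
    lo, hi = -a[i], C - a[i]          # keep a[i] + d inside [0, C]
    if s == 1:
        lo, hi = max(lo, a[j] - C), min(hi, a[j])      # a[j] - d in box
    else:
        lo, hi = max(lo, -a[j]), min(hi, C - a[j])     # a[j] + d in box
    d = min(max(d, lo), hi)
    a = list(a)
    a[i] += d
    a[j] -= s * d
    return a

# Opposite labels: both coefficients move up together, equality preserved.
a_new = pair_update([0.0, 0.0], 0, 1, 0.5, [1, -1], 1.0)  # -> [0.5, 0.5]
```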