Results 1–10 of 19
Performance evaluation of pattern classifiers for handwritten character recognition
 International Journal on Document Analysis and Recognition
, 2002
Abstract

Cited by 31 (3 self)
Abstract. This paper describes a performance evaluation study in which some efficient classifiers are tested on handwritten digit recognition. The evaluated classifiers include a statistical classifier (the modified quadratic discriminant function, MQDF), three neural classifiers, and an LVQ (learning vector quantization) classifier. They are efficient in that high accuracies can be achieved at moderate memory space and computation cost. Performance is measured in terms of classification accuracy, sensitivity to training sample size, ambiguity rejection, and outlier resistance. The outlier resistance of the neural classifiers is enhanced by training with synthesized outlier data. The classifiers are tested on a large data set extracted from NIST SD19. The results show that the test accuracies of the evaluated classifiers are comparable to or higher than those of the nearest neighbor (1-NN) rule and regularized discriminant analysis (RDA). Neural classifiers are shown to be more susceptible to small sample size than MQDF, although they yield higher accuracies on large sample sizes. Among the neural classifiers, the polynomial classifier (PC) gives the highest accuracy and performs best in ambiguity rejection. On the other hand, MQDF is superior in outlier rejection even though it is not trained with outlier data. The results indicate that pattern classifiers have complementary advantages and should be appropriately combined to achieve higher performance.
Reject Option with Multiple Thresholds
 Pattern Recognition
, 2000
Abstract

Cited by 29 (6 self)
In this paper, we investigate the effects of estimation errors on Chow's rule, and propose the use of multiple reject thresholds related to the data classes. The reported experimental results show that such class-related reject thresholds provide an error-reject tradeoff better than the one given by Chow's rule.
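Class-related reject thresholds of the kind this abstract proposes can be sketched in a few lines, assuming per-class posterior estimates are available. The function name, the list-based interfaces, and the None-for-reject convention below are illustrative assumptions, not the paper's notation:

```python
def classify_with_class_thresholds(posteriors, thresholds):
    """Pick the arg-max class, but accept the decision only if that class's
    posterior clears its own reject threshold; otherwise reject (None)."""
    # posteriors[k] estimates P(class k | x); thresholds[k] is class k's cutoff.
    k = max(range(len(posteriors)), key=posteriors.__getitem__)
    return k if posteriors[k] >= thresholds[k] else None
```

With a single shared threshold this reduces to Chow's rule; tuning one threshold per class is what lets the error-reject tradeoff improve when the per-class posterior estimates have different error characteristics.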
Classification with a reject option using a hinge loss
, 2006
Abstract

Cited by 13 (3 self)
We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss (the φ-risk) also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y = 1 | X) is unlikely to be close to certain critical values.
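A convex surrogate of the kind this abstract describes is piecewise linear like the ordinary hinge loss, but with a steeper slope for negative margins tied to the reject cost. The exact parameterization below, with slope (1 − d)/d for a reject cost d < 1/2, is our illustrative assumption rather than a quotation from the paper:

```python
def generalized_hinge(z, d):
    """A generalized hinge loss phi(z) for margin z and reject cost d < 1/2:
    slope -(1 - d)/d on z < 0, the usual hinge 1 - z on [0, 1), and 0 beyond."""
    a = (1.0 - d) / d  # steeper penalty for confidently wrong predictions
    if z < 0.0:
        return 1.0 - a * z
    if z < 1.0:
        return 1.0 - z
    return 0.0
```

Because the function is convex in z, the sample average of this loss can be minimized efficiently, which is the point the abstract makes about the surrogate φ.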
Classification with reject option
 Canad. J. Statist
, 2006
Abstract

Cited by 12 (2 self)
This paper studies two-class (or binary) classification of elements X in R^k that allows for a reject option. Based on n independent copies of the pair of random variables (X, Y) with X ∈ R^k and Y ∈ {0, 1}, we consider classifiers f(X) that render three possible outputs: 0, 1 and R. The option R expresses doubt and is to be used for the few observations that are hard to classify in an automatic way. Chow (1970) derived the optimal rule minimizing the risk P{f(X) ≠ Y, f(X) ≠ R} + d·P{f(X) = R}. This risk function assumes that the cost of making a wrong decision equals 1 and that of utilizing the reject option is d. We show that the classification problem hinges on the behavior of the regression function η(x) = E(Y | X = x) near d and 1 − d. (Here d ∈ [0, 1/2], as the other cases turn out to be trivial.) Classification rules can be categorized into plug-in estimators and empirical risk minimizers. Both types are considered here, and we prove that the rate of convergence of the risk of any estimate depends on P{|η(X) − d| ≤ δ} + P{|η(X) − (1 − d)| ≤ δ} and on the quality of the estimate for η or an appropriate measure of the size of the class of classifiers, in the case of plug-in rules and empirical risk minimizers, respectively. We extend the mathematical framework even further by differentiating between the costs associated with the two possible errors: predicting f(X) = 0 whilst Y = 1, and predicting f(X) = 1 whilst Y = 0. Such situations are common in, for instance, medical studies where misclassifying a sick patient as healthy is worse than the opposite.
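The plug-in form of Chow's rule described above is short enough to state directly: with an estimate of the posterior η(x) = P(Y = 1 | X = x) and reject cost d, predict 1 when the estimate is at least 1 − d, predict 0 when it is at most d, and reject in between. The function name and the use of the string "R" for the reject output are illustrative choices:

```python
def chow_plugin_rule(eta, d):
    """Chow-style plug-in rule: 0, 1, or 'R' (reject) from an estimated
    posterior eta = P-hat(Y = 1 | X = x) and reject cost d in [0, 1/2]."""
    if not 0.0 <= d <= 0.5:
        raise ValueError("reject cost d must lie in [0, 1/2]")
    if eta >= 1.0 - d:
        return 1
    if eta <= d:
        return 0
    return "R"  # posterior falls in (d, 1 - d): too ambiguous to decide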
Analysis of Error-Reject Tradeoff in Linearly Combined Classifiers
 Pattern Recognition
, 2002
Abstract

Cited by 9 (3 self)
In this paper, a framework for the analysis of the error-reject tradeoff in linearly combined classifiers is proposed. We start from a framework developed by Tumer and Ghosh [1,2]. We extend this framework and analyse some hypotheses under which the linear combination of classifier outputs can improve the error-reject tradeoff of the individual classifiers. Experiments that support some of the analytical results are reported.
Costsensitive learning in Support Vector Machines
 In VIII Convegno Associazione Italiana per L’Intelligenza Artificiale
, 2002
Abstract

Cited by 6 (0 self)
In this paper, a cost-sensitive learning method for support vector machine (SVM) classifiers is proposed. We focus on a particular case of cost-sensitive problems, namely, classification with reject option. Standard learning algorithms, including the one for SVMs, are not cost-sensitive. In particular, they cannot handle the reject option. However, we show that, under the framework of the structural risk minimisation induction principle, on which standard SVMs are based, the rejection region should be determined during the training phase of a classifier, by the learning algorithm. We apply this approach to develop a cost-sensitive SVM classifier, following Vapnik's maximum margin method as used in the derivation of standard SVMs.
Lasso type classifiers with a reject option
 Electronic Journal of Statistics
, 2007
Abstract

Cited by 5 (0 self)
We consider the problem of binary classification where one can, for a particular cost, choose not to classify an observation. We present a simple proof of the oracle inequality for the excess risk of structural risk minimizers using a lasso-type penalty.
The Behavior Knowledge Space Fusion Method: Analysis of Generalization Error and Strategies for Performance Improvement
 In Proc. Int. Workshop on Multiple Classifier Systems (LNCS 2709)
, 2003
Abstract

Cited by 4 (1 self)
In the pattern recognition literature, Huang and Suen introduced the "multinomial" rule for the fusion of multiple classifiers under the name of the Behavior Knowledge Space (BKS) method [1]. This classifier fusion method can provide very good performance if large and representative data sets are available.
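The multinomial (BKS) rule is simple to state: index a table by the joint vector of individual classifier decisions, record in each cell how often each true class occurred on training data, and predict by looking up the cell and returning its majority class. A minimal sketch follows; the function names and the reject-on-unseen-cell behavior (returning None) are illustrative choices, not the paper's specification:

```python
from collections import Counter, defaultdict

def bks_fit(classifier_outputs, labels):
    """Build the BKS table: for each tuple of individual classifier
    decisions, count how often each true class label occurred."""
    table = defaultdict(Counter)
    for outs, y in zip(classifier_outputs, labels):
        table[tuple(outs)][y] += 1
    return table

def bks_predict(table, outs):
    """Look up the cell for this decision vector; return its majority
    class, or None (reject) if the cell was never seen in training."""
    cell = table.get(tuple(outs))
    if not cell:
        return None
    return cell.most_common(1)[0][0]
```

The table has one cell per possible combination of classifier decisions, which is why the method needs large, representative data sets: sparsely populated cells generalize poorly.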
Error Rejection in Linearly Combined Multiple Classifiers
 In Proc. Int. Workshop on Multiple Classifier Systems (LNCS 2096)
, 2001
Abstract

Cited by 3 (1 self)
In this paper, the error-reject tradeoff of linearly combined multiple classifiers is analysed in the framework of minimum risk theory. The theoretical analysis described in [12,13] is extended to handle the reject option, and the optimality of the error-reject tradeoff is analysed under the assumption of independence among the errors of the individual classifiers. The improvements of the error-reject tradeoff obtained by linear classifier combination are quantified. Finally, a method for computing the coefficients of the linear combination and the value of the reject threshold is proposed. Experimental results on four different data sets are reported.
A combining strategy for illdefined problems
Abstract

Cited by 3 (1 self)
In this paper we present a combining strategy to cope with the problem of classification in ill-defined domains. In these cases, even though a particular target class may be sampled in a representative manner, an outlier class may be poorly sampled, or new outlier classes may occur that were not considered during training. This may have a considerable impact on classification performance. The objective of a classifier in this situation is to utilise all known information in discriminating, and to remain as robust as possible to changing conditions. A classification scheme that deals with this problem is presented, consisting of a sequential combination of a one-class and a multi-class classifier. We show that it can outperform the traditional classifier-with-reject-option scheme, locally selecting/training models for the purpose of optimising the classification and rejection performance.
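The sequential combination described above can be sketched generically: a one-class model first screens each sample for membership of the known target distribution, and only accepted samples are passed to the multi-class model. The callables and the "outlier" return value below are illustrative assumptions, not the paper's API:

```python
def sequential_combiner(x, one_class_accepts, multiclass_predict):
    """Two-stage scheme: a one-class screen for outliers, then a
    multi-class decision only for samples the screen accepts."""
    if not one_class_accepts(x):
        return "outlier"  # sample looks unlike anything seen in training
    return multiclass_predict(x)
```

Unlike a single classifier with a reject option, the outlier decision here never depends on the multi-class posteriors, so it stays robust when outlier classes unseen in training appear at test time.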