Results 1-10 of 15
Analysis of decision boundaries in linearly combined neural classifiers
 Pattern Recognition
, 1996
Abstract

Cited by 84 (22 self)
Combining or integrating the outputs of several pattern classifiers has led to improved performance in a multitude of applications. This paper provides an analytical framework to quantify the improvements in classification results due to combining. We show that combining networks linearly in output space reduces the variance of the actual decision region boundaries around the optimum boundary. This result is valid under the assumption that the a posteriori probability distributions for each class are locally monotonic around the Bayes optimum boundary. In the absence of classifier bias, the error is shown to be proportional to the boundary variance, resulting in a simple expression for error rate improvements. In the presence of bias, the error reduction, expressed in terms of a bias reduction factor, is shown to be less than or equal to the reduction obtained in the absence of bias. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions and combining in output space. Keywords: combining, decision boundary, neural networks, hybrid networks, variance reduction, pattern classification.
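The linear output-space combining analyzed in this abstract can be sketched as follows; the three-classifier setup and the posterior values below are hypothetical, chosen only to illustrate averaging in output space followed by an argmax decision.

```python
import numpy as np

# Hypothetical posterior estimates from three classifiers for four samples
# and two classes (values invented for illustration).
outputs = np.array([
    [[0.60, 0.40], [0.30, 0.70], [0.80, 0.20], [0.45, 0.55]],  # classifier 1
    [[0.70, 0.30], [0.40, 0.60], [0.60, 0.40], [0.55, 0.45]],  # classifier 2
    [[0.50, 0.50], [0.20, 0.80], [0.90, 0.10], [0.40, 0.60]],  # classifier 3
])

combined = outputs.mean(axis=0)   # linear combining in output space
labels = combined.argmax(axis=1)  # decide from the averaged posteriors
print(labels)                     # -> [0 1 0 1]
```

Averaging the per-class outputs before taking the argmax is what moves the effective decision boundary toward the optimum, per the variance argument in the abstract.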
Linear and Order Statistics Combiners for Pattern Classification
 Combining Artificial Neural Nets
, 1999
Abstract

Cited by 72 (8 self)
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the "added" error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based nonlinear combiners, we derive expressions that indicate how much the median, the maximum and in general the ith order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
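The factor-of-N claim for N unbiased, uncorrelated classifiers can be checked numerically; the Gaussian error model, sigma, and trial count below are assumptions made for the simulation, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10            # number of unbiased, uncorrelated classifiers
trials = 100_000  # Monte Carlo trials
sigma = 1.0       # std of each classifier's boundary-offset error (assumed)

# Each row: one trial of N independent boundary-offset errors.
errors = rng.normal(0.0, sigma, size=(trials, N))
avg_error = errors.mean(axis=1)  # simple averaging combiner

print(np.var(errors[:, 0]))  # ~ sigma**2 (a single classifier)
print(np.var(avg_error))     # ~ sigma**2 / N (the averaged combiner)
```

Since the added error is proportional to the boundary variance, the roughly N-fold drop in variance is what yields the factor-of-N reduction in added error.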
Theoretical Foundations Of Linear And Order Statistics Combiners For Neural Pattern Classifiers
 IEEE Transactions on neural networks
, 1996
Abstract

Cited by 32 (5 self)
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This paper provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and the order statistics combiners introduced in this paper. We show that combining networks in output space reduces the variance of the actual decision region boundaries around the optimum boundary. For linear combiners, we show that in the absence of classifier bias, the added classification error is proportional to the boundary variance. For nonlinear combiners, we show analytically that the selection of the median, the maximum and in general the ith order statistic improves classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions...
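A minimal sketch of the order statistics combiners named in this abstract (median, maximum, and in general the ith order statistic), using made-up posterior estimates from five hypothetical classifiers:

```python
import numpy as np

# Posterior estimates from five hypothetical classifiers for one sample,
# three classes (values invented for illustration).
outputs = np.array([
    [0.20, 0.50, 0.30],
    [0.10, 0.70, 0.20],
    [0.30, 0.40, 0.30],
    [0.20, 0.60, 0.20],
    [0.25, 0.45, 0.30],
])

med = np.median(outputs, axis=0)       # median combiner
mx = outputs.max(axis=0)               # max combiner
ith = np.sort(outputs, axis=0)[2]      # ith order statistic per class (i = 2, zero-based)

print(med.argmax(), mx.argmax(), ith.argmax())  # -> 1 1 1
```

Each combiner applies its order statistic per class across the classifier outputs and then decides by argmax; with five classifiers the i = 2 order statistic coincides with the median.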
A neural network based hybrid system for detection, characterization and classification of shortduration oceanic signals
 IEEE Journal of Ocean Engineering
, 1992
Abstract

Cited by 28 (19 self)
Automated identification and classification of short-duration oceanic signals obtained from passive sonar is a complex problem because of the large variability in both temporal and spectral characteristics even in signals obtained from the same source. This paper presents the design and evaluation of a comprehensive classifier system for such signals. We first highlight the importance of selecting appropriate signal descriptors or feature vectors for high-quality classification of realistic short-duration oceanic signals. Wavelet-based feature extractors are shown to be superior to the more commonly used autoregressive coefficients and power spectral coefficients for this purpose. A variety of static neural network classifiers are evaluated and compared favorably with traditional statistical techniques for signal classification. We concentrate on those networks that are able to tune out irrelevant input features and are less susceptible to noisy inputs, and introduce two new neural-network-based classifiers. Methods for combining the outputs of several classifiers to yield a more accurate labeling are proposed and evaluated based on the interpretation of network outputs as approximating posterior class probabilities. These methods lead to higher classification accuracy and also provide a mechanism for recognizing deviant signals and false alarms. Performance results are given for signals in the DARPA standard data set I. Keywords: neural networks, pattern classification, passive sonar, short-duration oceanic signals, feature extraction, evidence combination.
Classifier Ensembles: Select Real-World Applications
, 2008
"... Broad classes of statistical classification algorithms have been developed and applied successfully to a wide range of real world domains. In general, ensuring that the particular classification algorithm matches the properties of the data is crucial in providing results that meet the needs of the p ..."
Abstract

Cited by 20 (0 self)
Broad classes of statistical classification algorithms have been developed and applied successfully to a wide range of real-world domains. In general, ensuring that the particular classification algorithm matches the properties of the data is crucial in providing results that meet the needs of the particular application domain. One way in which the impact of this algorithm/application match can be alleviated is by using ensembles of classifiers, where a variety of classifiers (either different types of classifiers or different instantiations of the same classifier) are pooled before a final classification decision is made. Intuitively, classifier ensembles allow the different needs of a difficult problem to be handled by classifiers suited to those particular needs. Mathematically, classifier ensembles provide an extra degree of freedom in the classical bias/variance tradeoff, allowing solutions that would be difficult (if not impossible) to reach with only a single classifier. Because of these advantages, classifier ensembles have been applied to many difficult real-world problems. In this paper, we survey select applications of ensemble methods to problems that have historically been most representative of the difficulties in classification. In particular, we survey applications of ensemble methods to remote sensing, person recognition, one vs. all recognition, and medicine.
Evidence Combination Techniques For Robust Classification Of Short-Duration Oceanic Signals
 In SPIE Conf. on Adaptive and Learning Systems, SPIE Proc
, 1992
Abstract

Cited by 18 (12 self)
The identification and classification of underwater acoustic signals is an extremely difficult problem because of low SNRs and a high degree of variability in the signals emanated from the same type of sound source. Since different classification techniques have different inductive biases, a single method cannot give the best results for all signal types. Rather, more accurate and robust classification can be obtained by combining the outputs (evidences) of multiple classifiers based on neural network and/or statistical pattern recognition techniques. In this paper, four approaches to evidence combination are presented and compared using realistic oceanic data. The first method uses an entropy-based weighting of individual classifier outputs. The second is based on combination of confidence factors in a manner similar to that used in MYCIN. The other two methods use majority voting and averaging, with little extra computational overhead. All these techniques give better results than tho...
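Of the four approaches listed, the two low-overhead ones (majority voting and averaging) are easy to sketch; the posterior values below are hypothetical and chosen so the two combiners disagree, which illustrates that they are not equivalent:

```python
import numpy as np
from collections import Counter

# Class posteriors from four hypothetical classifiers for one signal,
# two classes (values invented for illustration).
outputs = np.array([
    [0.60, 0.40],
    [0.55, 0.45],
    [0.30, 0.70],
    [0.52, 0.48],
])

# Averaging combiner: mean posterior per class, then argmax.
avg_label = int(outputs.mean(axis=0).argmax())

# Majority voting: each classifier casts its own argmax as a vote.
votes = outputs.argmax(axis=1).tolist()
vote_label = Counter(votes).most_common(1)[0][0]

print(avg_label, vote_label)  # -> 1 0
```

Here one very confident dissenter swings the average to class 1 while voting, which ignores confidence, picks class 0; this is why the choice of combiner matters.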
Integration Of Neural Classifiers For Passive Sonar Signals
 In C.T. Leondes, editor, DSP Theory and Applications
, 1995
Abstract

Cited by 15 (12 self)
The identification and classification of underwater acoustic signals is an extremely difficult problem because of low SNRs and a high degree of variability in the signals emanated from the same type of sound source. Since different classification techniques have different inductive biases, a single method cannot give the best results for all signal types. Rather, more accurate and robust classification can be obtained by combining the outputs (evidences) of multiple classifiers based on neural network and/or statistical pattern recognition techniques. In this paper, five approaches are compared for integrating the decisions made by networks using sigmoidal activation functions exhibiting global responses with those made by localized basis function networks. These methods are compared using realistic oceanic data. The first method uses an entropy-based weighting of individual classifier outputs. The second is based on combination of confidence factors in a manner similar to that used in MY...
Bayes Error Rate Estimation Using Classifier Ensembles
, 2003
Abstract

Cited by 3 (0 self)
The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
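A simplified plug-in sketch of the first method (one reading of "estimate the Bayes error from averaged ensemble posteriors", not the article's exact estimator): average the classifiers' posterior estimates, then take the expected probability that the maximum a posteriori decision is wrong.

```python
import numpy as np

# Averaged ensemble posterior estimates for six samples, two classes
# (hypothetical values for illustration).
avg_posteriors = np.array([
    [0.90, 0.10],
    [0.80, 0.20],
    [0.40, 0.60],
    [0.30, 0.70],
    [0.95, 0.05],
    [0.55, 0.45],
])

# Plug-in estimate: E[1 - max_c p(c|x)], using the averaged posteriors
# as a stand-in for the true a posteriori probabilities.
bayes_err_est = float((1.0 - avg_posteriors.max(axis=1)).mean())
print(bayes_err_est)
```

The estimate is only as good as the averaged posteriors; the article's second method addresses this by correcting for output correlation among ensemble members.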
Adaptive Kernel Classifiers for Short-Duration Oceanic Signals
 In Proceedings IEEE Conference on Neural Networks for Ocean Engineering
, 1991
Abstract

Cited by 3 (1 self)
Two kernel networks are presented for the classification of short-duration acoustic signals characterized by wavelet coefficients and signal duration. These networks combine the positive features of exemplar-based classifiers such as the Learning Vector Quantization method, and kernel classifiers using radial basis functions. Results on the DARPA Data Set 1 show that these networks compare favorably with other classification techniques, with almost 100% accuracy achievable in identifying test signals that are similar to the training signals. A method of combining the outputs of several classifiers to yield a more accurate labelling is proposed based on the interpretation of network outputs as approximating posterior class probabilities. This also provides a technique for recognizing deviant signals and false alarms. 1. Introduction. Underwater transient signals obtained from passive sonar contain valuable clues for source identification in noisy and dissipative environments. However, th...
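The posterior-based rejection idea mentioned in this abstract (recognizing deviant signals and false alarms) can be sketched as follows; the threshold and posterior values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Averaged ensemble posteriors for three signals, three classes
# (hypothetical values for illustration).
avg_posteriors = np.array([
    [0.05, 0.90, 0.05],  # confident: accept class 1
    [0.40, 0.35, 0.25],  # ambiguous: flag as deviant / possible false alarm
    [0.10, 0.15, 0.75],  # confident: accept class 2
])

THRESHOLD = 0.5  # assumed rejection threshold, not from the paper

# Accept the argmax label only when the top posterior is confident enough.
labels = [int(p.argmax()) if p.max() >= THRESHOLD else "reject"
          for p in avg_posteriors]
print(labels)  # -> [1, 'reject', 2]
```

Treating low maximum posteriors as rejections is one simple way to turn the posterior interpretation of network outputs into a deviant-signal detector.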
Analysis of Decision Boundaries in Linearly Combined Neural Classifiers, by Kagan Tumer and Joydeep Ghosh
 Pattern Recognition
, 1996
Abstract
Combining or integrating the outputs of several pattern classifiers has led to improved performance in a multitude of applications. This paper provides an analytical framework to quantify the improvements in classification results due to combining. We show that combining networks linearly in output space reduces the variance of the actual decision region boundaries around the optimum boundary. This result is valid under the assumption that the a posteriori probability distributions for each class are locally monotonic around the Bayes optimum boundary. In the absence of classifier bias, the error is shown to be proportional to the boundary variance, resulting in a simple expression for error rate improvements. In the presence of bias, the error reduction, expressed in terms of a bias reduction factor, is shown to be less than or equal to the reduction obtained in the absence of bias. The analysis presented here facilitates the understanding of the relationships among error rates, cla...