## Classifier Combining: Analytical Results and Implications (1995)

Venue: Proceedings of the AAAI-96 Workshop on Integrating Multiple Learned Models for Improving and Scaling Machine Learning Algorithms

Citations: 15 (0 self)

### BibTeX

```bibtex
@INPROCEEDINGS{Tumer95classifiercombining,
  author    = {Kagan Tumer and Joydeep Ghosh},
  title     = {Classifier Combining: Analytical Results and Implications},
  booktitle = {Proceedings of the AAAI-96 Workshop on Integrating Multiple Learned Models for Improving and Scaling Machine Learning Algorithms},
  year      = {1995},
  pages     = {126--132},
  publisher = {AAAI Press}
}
```

### Abstract

Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This paper summarizes our recent theoretical results that quantify the improvements due to multiple classifier combining. Furthermore, we present an extension of this theory that leads to an estimate of the Bayes error rate. Practical aspects such as expressing the confidences in decisions and determining the best data partition/classifier selection are also discussed.

**Keywords:** linear combining, order statistics combining, Bayes error, error correlation, error reduction, ensemble networks, performance limits.

### Introduction

Given infinite training data, consistent classifiers approximate the Bayesian decision boundaries to arbitrary precision, thus providing similar generalizations (Geman, Bienenstock, & Doursat 1992). However, often only a limited portion of the pattern space is avai...
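To make the combining idea concrete, here is a minimal sketch (not the paper's code) of the linear combining scheme the abstract refers to: several classifiers each produce noisy estimates of the class posteriors, and averaging those estimates yields decisions closer to the Bayes decision than a typical individual classifier. The noise model, noise scale, and ensemble size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_posteriors(true_post, noise_scale, rng):
    """Simulate one classifier: the true posteriors plus independent
    zero-mean noise, renormalized so each row sums to 1."""
    noisy = true_post + rng.normal(0.0, noise_scale, true_post.shape)
    noisy = np.clip(noisy, 1e-6, None)
    return noisy / noisy.sum(axis=1, keepdims=True)

# True 2-class posteriors for 1000 samples; the Bayes decision is the argmax.
p1 = rng.uniform(0.0, 1.0, size=(1000, 1))
true_post = np.hstack([p1, 1.0 - p1])
bayes_labels = true_post.argmax(axis=1)

# Seven classifiers, each approximating the posteriors with independent error.
estimates = [noisy_posteriors(true_post, 0.15, rng) for _ in range(7)]

# Linear combining: average the posterior estimates, then decide by argmax.
combined = np.mean(estimates, axis=0)

def disagreement(post):
    """Fraction of decisions that differ from the Bayes decision."""
    return np.mean(post.argmax(axis=1) != bayes_labels)

individual = [disagreement(e) for e in estimates]
print(f"mean individual disagreement: {np.mean(individual):.3f}")
print(f"combined disagreement:        {disagreement(combined):.3f}")
```

Because the simulated errors are independent, averaging shrinks the posterior noise, so the combined decisions disagree with the Bayes decisions less often than a typical single classifier does, which is the qualitative effect the paper's analysis quantifies.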