Results 1 - 10 of 2,983

Cluster Ensembles - A Knowledge Reuse Framework for Combining Multiple Partitions

by Alexander Strehl, Joydeep Ghosh, Claire Cardie - Journal of Machine Learning Research, 2002
"... This paper introduces the problem of combining multiple partitionings of a set of objects into a single consolidated clustering without accessing the features or algorithms that determined these partitionings. We first identify several application scenarios for the resultant 'knowledge reuse' framework that we call cluster ensembles. The cluster ensemble problem is then formalized as a combinatorial optimization problem in terms of shared mutual information. In addition to a direct maximization approach, we propose three effective and efficient techniques for obtaining high-quality combiners ..."
Abstract - Cited by 603 (20 self)
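The shared-mutual-information objective this abstract describes can be sketched numerically: score a candidate consensus labeling by its average normalized mutual information (NMI) with the input partitions. This is a minimal, hypothetical illustration, not the authors' implementation; the labelings are invented toy data.

```python
# Sketch of scoring a consensus clustering by average NMI with input
# partitions (toy illustration of the cluster-ensemble objective).
from collections import Counter
from math import log

def nmi(a, b):
    """Normalized mutual information between two labelings of the same objects."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))
    mi = sum(c / n * log(c * n / (pa[x] * pb[y]))
             for (x, y), c in pab.items())
    ha = -sum(c / n * log(c / n) for c in pa.values())
    hb = -sum(c / n * log(c / n) for c in pb.values())
    return mi / ((ha * hb) ** 0.5) if ha and hb else 0.0

def avg_nmi(candidate, partitions):
    return sum(nmi(candidate, p) for p in partitions) / len(partitions)

p1 = [0, 0, 1, 1]
p2 = [1, 1, 0, 0]          # same grouping, different label names
print(nmi(p1, p2))         # close to 1.0: identical up to relabeling
```

A direct-maximization combiner would search over candidate labelings for the one maximizing `avg_nmi`; the paper's three heuristics avoid that exhaustive search.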

The "Test and Select" Approach to Ensemble Combination

by Amanda J. C. Sharkey, Noel E. Sharkey, Uwe Gerecke, G. O. Chandroth, 2000
"... The performance of neural nets can be improved through the use of ensembles of redundant nets. In this paper, some of the available methods of ensemble creation are reviewed and the "test and select" methodology for ensemble creation is considered. This approach involves testing potential ensemble combinations on a validation set, and selecting the best performing ensemble on this basis, which is then tested on a final test set. The application of this methodology, and of ensembles in general, is explored further in two case studies. The first case study is of fault diagnosis ..."
Abstract - Cited by 28 (0 self)
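The "test and select" procedure summarized above reduces to: enumerate candidate ensembles, score each on a validation set, and keep the best. A toy sketch under invented predictors and data (not the authors' code):

```python
# "Test and select": pick the best-scoring ensemble subset on validation data.
from itertools import combinations

def majority(preds):
    return max(set(preds), key=preds.count)

def accuracy(members, xs, ys):
    hits = sum(majority([m(x) for m in members]) == y for x, y in zip(xs, ys))
    return hits / len(ys)

# Invented pool of base predictors and a small validation set.
pool = [lambda x: x > 2, lambda x: x > 4, lambda x: x % 2 == 0]
val_x, val_y = [1, 3, 5, 6], [False, True, True, True]

best = max((subset
            for r in (1, 3)              # odd sizes avoid tied votes
            for subset in combinations(pool, r)),
           key=lambda s: accuracy(s, val_x, val_y))
print(accuracy(best, val_x, val_y))      # -> 1.0 on this toy data
```

In practice the selected ensemble would then be evaluated once more on a held-out test set, as the abstract notes, to get an unbiased performance estimate.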

Neural network ensembles, cross validation, and active learning

by Anders Krogh, Jesper Vedelsby - Neural Information Processing Systems 7, 1995
"... Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so it quantifies the disagreement among the networks. It is discussed how to use the ambiguity in combination with cross-validation to give a reliable estimate of the ensemble generalization error, and how this type of ensemble cross-validation can sometimes improve performance. It is shown how to estimate ..."
Abstract - Cited by 479 (6 self)
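The ambiguity decomposition this abstract refers to states that, for squared error and a weighted-average ensemble, the ensemble error equals the weighted mean member error minus the weighted mean ambiguity. A numerical check with invented values:

```python
# Numerical check of the ambiguity decomposition for squared error:
# (f_bar - y)^2 = sum_i w_i (f_i - y)^2 - sum_i w_i (f_i - f_bar)^2
preds = [1.0, 2.0, 4.0]          # member outputs for one input (invented)
w = [0.5, 0.3, 0.2]              # convex combination weights
y = 2.5                          # target

f_bar = sum(wi * fi for wi, fi in zip(w, preds))
ens_err = (f_bar - y) ** 2
mean_err = sum(wi * (fi - y) ** 2 for wi, fi in zip(w, preds))
ambiguity = sum(wi * (fi - f_bar) ** 2 for wi, fi in zip(w, preds))

print(abs(ens_err - (mean_err - ambiguity)) < 1e-12)   # -> True
```

Because the ambiguity term is always non-negative, the ensemble error never exceeds the weighted mean member error, which is why disagreement among members helps.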

A Survey on Ensemble Combination Schemes of Neural Network

by Varuna Tyagi, Anju Mishra
"... Neural network ensembles are an effective approach to improving a neural network system. A combination of neural networks can provide more accurate results than a single network. Simple averaging, weighted averaging, majority voting and ranking are commonly used combination strategies ..."
Abstract
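The four combination strategies the survey lists can each be sketched in a few lines. The functions and toy values below are illustrative assumptions, not code from the survey:

```python
# Minimal sketches of common combination strategies for ensemble outputs.
def simple_average(outputs):
    return sum(outputs) / len(outputs)

def weighted_average(outputs, weights):
    return sum(o * w for o, w in zip(outputs, weights)) / sum(weights)

def majority_vote(labels):
    return max(set(labels), key=labels.count)

def rank_combine(rankings):
    # Borda-style: sum each candidate's rank positions; lowest total wins.
    totals = {}
    for ranking in rankings:
        for pos, cand in enumerate(ranking):
            totals[cand] = totals.get(cand, 0) + pos
    return min(totals, key=totals.get)

print(simple_average([0.25, 0.5, 0.75]))                  # -> 0.5
print(weighted_average([0.0, 1.0], [1, 3]))               # -> 0.75
print(majority_vote(["cat", "dog", "cat"]))               # -> cat
print(rank_combine([["a", "b", "c"],
                    ["b", "a", "c"],
                    ["a", "c", "b"]]))                    # -> a
```

Averaging suits regression outputs; voting and rank combination suit class labels and ranked candidate lists, respectively.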

Ensemble Tracking

by Shai Avidan - IEEE Transactions on Pattern Analysis and Machine Intelligence , 2007
"... We consider tracking as a binary classification problem, where an ensemble of weak classifiers is trained on-line to distinguish between the object and the background. The ensemble of weak classifiers is combined into a strong classifier using AdaBoost. The strong classifier is then used to label ..."
Abstract - Cited by 328 (2 self)
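The AdaBoost machinery at the core of this paper, weak classifiers combined into a strong weighted vote, can be sketched on toy data. The 1-D threshold "stumps" below are invented stand-ins for the paper's pixel-level features; this is an illustration of AdaBoost itself, not the tracker:

```python
# Minimal AdaBoost: reweight examples, pick the best weak classifier each
# round, and combine the picks into a strong weighted-vote classifier.
from math import exp, log

def adaboost(xs, ys, stumps, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n                        # per-example weights
    strong = []                              # list of (alpha, stump)
    for _ in range(rounds):
        h = min(stumps, key=lambda h: sum(wi for wi, x, y in zip(w, xs, ys)
                                          if h(x) != y))
        err = sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * log((1 - err) / err)   # weak classifier's vote weight
        strong.append((alpha, h))
        w = [wi * exp(-alpha * y * h(x)) for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]             # renormalize
    return lambda x: 1 if sum(a * h(x) for a, h in strong) > 0 else -1

xs = [0, 1, 2, 3, 4, 5]
ys = [-1, -1, 1, 1, -1, -1]                  # not separable by one threshold
stumps = [lambda x, t=t, s=s: s if x < t else -s
          for t in range(6) for s in (1, -1)]
classify = adaboost(xs, ys, stumps)
print([classify(x) for x in xs])             # -> [-1, -1, 1, 1, -1, -1]
```

No single stump classifies this pattern, but the three-member weighted vote does, which is the sense in which boosting turns weak classifiers into a strong one.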

Meteorology Research & Development Medium-range multi-model ensemble combination and calibration

by Christine Johnson, Richard Swinbank
"... © Crown Copyright. Medium-range multi-model ensemble combination and calibration ..."
Abstract

Evolutionary Ensembles: Combining Learning Agents using Genetic Algorithms

by Jared Sylvester, et al., 2005
"... Ensembles of classifiers are often used to achieve accuracy greater than any single classifier. The predictions of these classifiers are typically combined by uniform or weighted voting. In this paper, we approach ensemble construction under a multi-agent framework. Each individual ..."
Abstract - Cited by 2 (2 self)

Popular ensemble methods: an empirical study

by David Opitz, Richard Maclin - Journal of Artificial Intelligence Research, 1999
"... An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. ..."
Abstract - Cited by 296 (4 self)
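Bagging, one of the popular methods this kind of empirical study covers, trains each member on a bootstrap resample of the data and combines members by majority vote. A bare-bones sketch with an invented 1-nearest-neighbour base learner and toy data (outputs depend on the random resamples):

```python
# Bagging sketch: bootstrap resamples + majority vote over trained members.
import random

def fit_1nn(sample):
    """Trivial base learner: predict the label of the nearest training point."""
    def predict(x):
        return min(sample, key=lambda p: abs(p[0] - x))[1]
    return predict

def bagging(data, n_members=5, seed=0):
    rng = random.Random(seed)
    # Each member sees a bootstrap resample (sampling with replacement).
    members = [fit_1nn([rng.choice(data) for _ in data])
               for _ in range(n_members)]
    def predict(x):
        votes = [m(x) for m in members]
        return max(set(votes), key=votes.count)
    return predict

data = [(0.0, "a"), (1.0, "a"), (4.0, "b"), (5.0, "b")]
model = bagging(data)
print(model(0.5), model(4.6))
```

Resampling decorrelates the members, which is what lets the vote outperform a single classifier on unstable learners.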

Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy.

by Ludmila I. Kuncheva (Editor: Robert E. Schapire) - Machine Learning, 2003
"... Diversity among the members of a team of classifiers is deemed to be a key issue in classifier combination. However, measuring diversity is not straightforward because there is no generally accepted formal definition. We have found and studied ten statistics which can measure diversity ..."
Abstract - Cited by 238 (0 self)
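One of the simpler diversity statistics examined in this line of work is the pairwise disagreement measure: the fraction of cases on which two classifiers' correctness differs, averaged over all pairs. The 0/1 "oracle" correctness vectors below are invented toy data:

```python
# Mean pairwise disagreement over a team's 0/1 correctness vectors.
from itertools import combinations

def disagreement(c1, c2):
    """Fraction of cases where exactly one of the two classifiers is correct."""
    return sum(a != b for a, b in zip(c1, c2)) / len(c1)

def mean_pairwise_disagreement(correctness):
    pairs = list(combinations(correctness, 2))
    return sum(disagreement(a, b) for a, b in pairs) / len(pairs)

# Rows: per-classifier correctness (1 = correct) on five validation cases.
oracle = [
    [1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
]
print(mean_pairwise_disagreement(oracle))   # averages 0.4, 0.8, 0.8 pairs
```

Higher values mean the members err on different cases, which is the property a diversity measure is meant to capture.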

Heterogeneous Ensemble Combination Search Using Genetic Algorithm for Class Imbalanced Data Classification

by Mohammad Nazmul Haque, Nasimul Noman, Regina Berretta, Pablo Moscato
"... Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble is dependent on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10-fold cross-validation on training data for evaluating the quality of each ..."
Abstract
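The general idea, a genetic search over which pool members join the ensemble, can be sketched in a stripped-down form. This is an illustrative toy, not the paper's GA-EoC: a bitstring marks the selected members, fitness is majority-vote accuracy on validation data, and the pool, data, and GA parameters are all invented:

```python
# Toy genetic search over classifier subsets (bitstring = ensemble membership).
import random

def majority(votes):
    return max(set(votes), key=votes.count)

def fitness(bits, pool, xs, ys):
    members = [m for b, m in zip(bits, pool) if b]
    if not members:
        return 0.0
    return sum(majority([m(x) for m in members]) == y
               for x, y in zip(xs, ys)) / len(ys)

def ga_select(pool, xs, ys, pop=8, gens=20, seed=1):
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in pool] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda b: fitness(b, pool, xs, ys), reverse=True)
        parents = popn[: pop // 2]           # elitism: keep the best half
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(pool))
            child = a[:cut] + b[cut:]        # one-point crossover
            i = rng.randrange(len(pool))
            child[i] ^= 1                    # point mutation
            children.append(child)
        popn = parents + children
    return max(popn, key=lambda b: fitness(b, pool, xs, ys))

pool = [lambda x: x > 0, lambda x: x > 10, lambda x: True, lambda x: x % 2 == 0]
xs, ys = [-3, -1, 2, 5, 7], [False, False, True, True, True]
best = ga_select(pool, xs, ys)
print(best, fitness(best, pool, xs, ys))
```

The real GA-EoC replaces this toy fitness with 10-fold cross-validation on training data, so each chromosome is scored without touching the test set.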

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University