Results 1 - 6 of 6
When Networks Disagree: Ensemble Methods for Hybrid Neural Networks
, 1993
Abstract

Cited by 290 (2 self)
This paper presents a general theoretical framework for ensemble methods of constructing significantly improved regression estimates. Given a population of regression estimators, we construct a hybrid estimator which is as good or better in the MSE sense than any estimator in the population. We argue that the ensemble method presented has several properties: 1) It efficiently uses all the networks of a population: none of the networks need be discarded. 2) It efficiently uses all the available data for training without overfitting. 3) It inherently performs regularization by smoothing in functional space, which helps to avoid overfitting. 4) It utilizes local minima to construct improved estimates, whereas other neural network algorithms are hindered by local minima. 5) It is ideally suited for parallel computation. 6) It leads to a very useful and natural measure of the number of distinct estimators in a population. 7) The optimal parameters of the ensemble estimator are given in clo...
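The minimum-MSE combination this abstract describes can be sketched numerically. Under the usual framing, the optimal convex combination of estimators has weights proportional to the rows of the inverse of the error correlation matrix. The data, estimator count, and noise model below are synthetic and purely illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target and a small "population" of imperfect estimators,
# simulated as the truth plus partially shared noise (illustrative only).
n, k = 200, 4
y = np.sin(np.linspace(0, 3, n))
preds = y[None, :] + 0.3 * rng.normal(size=(k, n)) + 0.1 * rng.normal(size=(1, n))

# Sample error correlation matrix C[i, j] = mean of e_i * e_j.
errors = preds - y[None, :]
C = errors @ errors.T / n

# Minimum-MSE weights subject to sum(w) = 1:  w = C^{-1} 1 / (1^T C^{-1} 1).
ones = np.ones(k)
w = np.linalg.solve(C, ones)
w /= w.sum()

ensemble = w @ preds
mse = lambda p: np.mean((p - y) ** 2)
print(mse(ensemble), min(mse(p) for p in preds))
```

Because any single estimator is itself a feasible weight vector, the optimally weighted ensemble's sample MSE can never exceed that of the best individual estimator, which is the "as good or better" guarantee the abstract states.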
Improving Regression Estimation: Averaging Methods for Variance Reduction with Extensions to General Convex Measure Optimization
, 1993
Combining the Predictions of Multiple Classifiers: Using Competitive Learning to Initialize Neural Networks
 In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence
, 1995
Abstract

Cited by 38 (6 self)
The primary goal of inductive learning is to generalize well; that is, to induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks will improve generalization. One of their key assumptions is that the individual networks should be independent in the errors they produce. In the standard way of performing backpropagation this assumption may be violated, because the standard procedure is to initialize network weights in the region of weight space near the origin. This means that backpropagation's gradient-descent search may only reach a small subset of the possible local minima. In this paper we present an approach to initializing neural networks that uses competitive learning to intelligently create networks that are originally located far from the origin of weight space, thereby potentially increasing the set of reachable local minima....
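As a rough illustration of the initialization idea the abstract describes (a sketch, not the authors' exact procedure), a winner-take-all competitive pass over the training inputs pulls prototype weight vectors toward the data, so first-layer weights start spread over input space rather than clustered near the origin. All names and parameters here are hypothetical:

```python
import numpy as np

def competitive_init(X, n_hidden, lr=0.1, epochs=5, seed=0):
    """Winner-take-all competitive learning: for each input, move the
    closest prototype toward it. The resulting prototypes can serve as
    hidden-unit weight vectors located near the data. (Sketch only.)"""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_hidden, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(X):
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            W[winner] += lr * (x - W[winner])    # move winner toward input
    return W

# Toy inputs centered away from the origin; prototypes end up near them.
X = np.random.default_rng(1).normal(loc=3.0, size=(100, 2))
W0 = competitive_init(X, n_hidden=5)
print(np.linalg.norm(W0, axis=1).min())
```

Different random seeds yield different prototype sets, so networks initialized this way start in different regions of weight space, which is exactly the error-decorrelation property the ensemble argument needs.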
Supervised Competitive Learning: A Technology for Pen-based Adaptation in Real Time
, 1994
Abstract
Supervised Competitive Learning: A Technology for Pen-based Adaptation in Real Time, by Thomas H. Fuller, Jr. Advisor: Professor Takayuki Dan Kimura. December 1994, Saint Louis, Missouri. The advent of affordable, pen-based computers promises wide application in educational and home settings. In such settings, systems will be regularly employed by a few users (children or students), and occasionally by other users (teachers or parents). The systems must adapt to the writing and gestures of regular users without losing prior recognition ability. Furthermore, this adaptation must occur in real time, so as not to frustrate or confuse the user and not to interfere with the task at hand. It must also provide a reliable measure of the likelihood of correct recognition. Supervised Competitiv...
Richard P. Lippmann, Neural Network Classifiers for Speech Recognition
Abstract
Neural nets offer an approach to computation that mimics biological nervous systems. Algorithms based on neural nets have been proposed to address speech recognition tasks which humans perform with little apparent effort. In this paper, neural net classifiers are described and compared with conventional classification algorithms. Perceptron classifiers trained with a new algorithm, called back propagation, were tested and found to perform roughly as well as conventional classifiers on digit and vowel classification tasks. A new net architecture, called a Viterbi net, which recognizes time-varying input patterns, provided an accuracy of better than 99% on a large speech database. Perceptrons and another neural net, the feature map, were implemented in a very large-scale integration (VLSI) device. Neural nets are highly interconnected networks of relatively simple processing elements, or nodes, that operate in parallel. They are designed to mimic the function of neurobiological networks. Recent work on neural networks raises the possibility of new approaches to the...
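The perceptron classifiers the survey compares can be sketched in a few lines with the classic perceptron learning rule. The two-class data below is a synthetic toy problem, not the paper's digit or vowel tasks:

```python
import numpy as np

# Two well-separated Gaussian clusters with labels -1 and +1 (synthetic).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
t = np.array([-1] * 50 + [1] * 50)

# Perceptron learning rule: on a misclassified point, nudge the
# hyperplane (w, b) toward the correct side.
w, b = np.zeros(2), 0.0
for _ in range(20):
    for x, y in zip(X, t):
        if y * (w @ x + b) <= 0:
            w += y * x
            b += y

acc = np.mean(np.sign(X @ w + b) == t)
print(acc)
```

For linearly separable data the perceptron convergence theorem guarantees this loop eventually stops updating; the multi-layer networks trained with back propagation that the paper evaluates extend this single-unit scheme to non-separable problems.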
Combining the Predictions of Multiple Classifiers: Using Competitive Learning to Initialize Neural Networks
Abstract
The primary goal of inductive learning is to generalize well; that is, to induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks will improve generalization. One of their key assumptions is that the individual networks should be independent in the errors they produce. In the standard way of performing backpropagation this assumption may be violated, because the standard procedure is to initialize network weights in the region of weight space near the origin. This means that backpropagation's gradient-descent search may only reach a small subset of the possible local minima. In this paper we present an approach to initializing neural networks that uses competitive learning to intelligently create networks that are originally located far from the origin of weight space, thereby potentially increasing the set of reachable local minima. We report experiments on two real-world datasets where combinations of networks initialized with our method generalize better than combinations of networks initialized the traditional way.