Results 1–10 of 81
Error Correlation And Error Reduction In Ensemble Classifiers
, 1996
Cited by 164 (22 self)
Using an ensemble of classifiers, instead of a single classifier, can lead to improved generalization. The gains obtained by combining, however, are often affected more by the selection of what is presented to the combiner than by the actual combining method that is chosen. In this paper we focus on data selection and classifier training methods, in order to "prepare" classifiers for combining. We review a combining framework for classification problems that quantifies the need for reducing the correlation among individual classifiers. Then, we discuss several methods that make the classifiers in an ensemble more complementary. Experimental results are provided to illustrate the benefits and pitfalls of reducing the correlation among classifiers, especially when the training data is in limited supply. 1 Introduction A classifier's ability to meaningfully respond to novel patterns, or generalize, is perhaps its most important property (Levin et al., 1990; Wolpert, 1990). In...
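The correlation argument above can be made concrete with a small Monte Carlo sketch (illustrative only, not from the paper): for an equally weighted average of N estimators whose errors have variance σ² and pairwise correlation ρ, the combined error variance is ρσ² + (1−ρ)σ²/N, so once N is moderate the correlation term dominates and reducing ρ matters more than adding members.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 10, 200_000          # ensemble size, Monte Carlo samples
sigma2, rho = 1.0, 0.5           # per-classifier error variance and pairwise correlation

# Draw correlated zero-mean errors for the N ensemble members.
cov = sigma2 * (rho * np.ones((N, N)) + (1 - rho) * np.eye(N))
errors = rng.multivariate_normal(np.zeros(N), cov, size=trials)

# Error of the equally weighted combiner is the mean of the member errors.
ensemble_error = errors.mean(axis=1)

empirical = ensemble_error.var()
predicted = rho * sigma2 + (1 - rho) * sigma2 / N
print(f"empirical {empirical:.3f}  vs  theory {predicted:.3f}")
```

With ρ = 0.5 and N = 10 the averaged variance only falls to 0.55σ², whereas fully decorrelated members (ρ = 0) would reach 0.1σ², which is the quantitative motivation for making classifiers complementary before combining.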
Constructive Algorithms for Structure Learning in Feedforward Neural Networks for Regression Problems
 IEEE Transactions on Neural Networks
, 1997
Cited by 74 (2 self)
In this survey paper, we review the constructive algorithms for structure learning in feedforward neural networks for regression problems. The basic idea is to start with a small network, then add hidden units and weights incrementally until a satisfactory solution is found. By formulating the whole problem as a state space search, we first describe the general issues in constructive algorithms, with special emphasis on the search strategy. A taxonomy, based on the differences in the state transition mapping, the training algorithm and the network architecture, is then presented. Keywords: Constructive algorithm, structure learning, state space search, dynamic node creation, projection pursuit regression, cascade-correlation, resource-allocating network, group method of data handling. I. Introduction A. Problems with Fixed Size Networks In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. Among...
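The "start small, grow until satisfactory" idea can be sketched in miniature (a toy illustration, not any specific algorithm from the survey): the loop below adds random tanh hidden units one at a time to a one-dimensional regressor and refits the output layer by least squares until a training-error target is met or a size limit is reached.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression target: y = sin(3x) on [0, 1].
X = np.linspace(0, 1, 200)
y = np.sin(3 * X)

hidden = []                      # (input weight, bias) of each tanh hidden unit
mse, target = np.inf, 1e-4
while mse > target and len(hidden) < 50:
    # Grow the network: add one new hidden unit with random parameters.
    hidden.append((rng.normal(scale=5), rng.normal(scale=5)))
    # Hidden-layer activations plus a constant bias column.
    H = np.column_stack([np.tanh(w * X + b) for w, b in hidden] + [np.ones_like(X)])
    # Refit the output layer by least squares for the current structure.
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
    mse = np.mean((H @ w_out - y) ** 2)

print(f"{len(hidden)} hidden units, training mse = {mse:.2e}")
```

Real constructive algorithms (dynamic node creation, cascade-correlation, etc.) choose and train the new unit far more carefully; the sketch only shows the structural search loop that the taxonomy organizes.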
Catastrophic forgetting, rehearsal and pseudorehearsal
 Connection Science
, 1995
"... rehearsal ..."
Structural adaptation and generalization in supervised feedforward networks
 Artif. Neural Networks
, 1994
Cited by 33 (22 self)
This work explores diverse techniques for improving the generalization ability of supervised feedforward neural networks via structural adaptation, and introduces a new network structure with sparse connectivity. Pruning methods, which start from a large network and trim it until a satisfactory solution is reached, are studied first. Then, construction methods, which build a network from a simple initial configuration, are presented. A survey of related results from the disciplines of function approximation theory, nonparametric statistical inference and estimation theory leads to methods for principled architecture selection and estimation of prediction error. A network based on sparse connectivity is proposed as an alternative approach to adaptive networks. The generalization ability of this network is improved by partly decoupling the outputs. We perform numerical simulations and provide comparative results for both classification and regression problems to show the generalization abilities of the sparse network.
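The pruning side of this program can be sketched with a deliberately tiny example (an illustrative toy, not the paper's method): fit a dense linear model, drop the small-magnitude weights, and refit the survivors. The threshold of 0.1 and the sparse toy problem are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear problem whose true weights are sparse: only 3 of 20 inputs matter.
n, d = 500, 20
w_true = np.zeros(d)
w_true[[2, 7, 13]] = [1.5, -2.0, 0.8]
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)

# "Large network" fit: dense least squares over all inputs.
w_dense, *_ = np.linalg.lstsq(X, y, rcond=None)

# Prune: drop small-magnitude weights, then retrain the surviving connections.
keep = np.abs(w_dense) > 0.1
w_fit, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
w_sparse = np.zeros(d)
w_sparse[keep] = w_fit

print(f"kept {keep.sum()} of {d} weights")
```

Magnitude pruning with retraining recovers the three true connections here; pruning a trained neural network uses the same prune-then-retrain loop with less trivial saliency measures.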
Estimating the Scene Illumination Chromaticity by Using a Neural Network
 JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A
, 2002
Data Visualization and Feature Selection: New Algorithms for Nongaussian Data
 in Advances in Neural Information Processing Systems
, 1999
Cited by 22 (1 self)
Data visualization and feature selection methods are proposed based on the joint mutual information and ICA. The visualization methods can find many good 2D projections for high-dimensional data interpretation, which cannot be easily found by other existing methods. The new variable selection method is found to be better at eliminating redundancy in the inputs than other methods based on simple mutual information. The efficacy of the methods is illustrated on a radar signal analysis problem to find 2D viewing coordinates for data visualization and to select inputs for a neural network classifier. Keywords: feature selection, joint mutual information, ICA, visualization, classification. 1 INTRODUCTION Visualization of input data and feature selection are intimately related. A good feature selection algorithm can identify meaningful coordinate projections for low dimensional data visualization. Conversely, a good visualization technique can suggest meaningful features to include ...
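A minimal sketch of why *joint* mutual information beats simple per-feature mutual information (illustrative only; the paper's estimators for continuous data via ICA are more involved). The target here is the XOR of two binary inputs, so each input is useless on its own, while the pair is fully informative and a duplicated input contributes no joint gain.

```python
import numpy as np
from collections import Counter

def entropy(cols):
    """Shannon entropy (bits) of the joint distribution of discrete columns."""
    counts = Counter(map(tuple, np.column_stack(cols)))
    p = np.array(list(counts.values())) / len(cols[0])
    return float(-(p * np.log2(p)).sum())

def mutual_info(xcols, y):
    """I(X; y) = H(X) + H(y) - H(X, y), estimated from counts."""
    return entropy(xcols) + entropy([y]) - entropy(list(xcols) + [y])

rng = np.random.default_rng(3)
n = 5000
x0 = rng.integers(0, 2, n)
x1 = rng.integers(0, 2, n)
x2 = x0.copy()                    # redundant duplicate of x0
y = x0 ^ x1                       # XOR: useless marginally, fully informative jointly

features = {"x0": x0, "x1": x1, "x2": x2}
selected = []
for _ in range(2):
    # Gain of each remaining feature measured *jointly* with those already chosen.
    gains = {k: mutual_info([features[s] for s in selected] + [v], y)
             for k, v in features.items() if k not in selected}
    selected.append(max(gains, key=gains.get))

print(selected)
```

Ranking features by I(x_i; y) alone scores all three inputs near zero and cannot distinguish the duplicate; the greedy joint criterion ends up with x1 plus one copy of x0, which together determine y exactly.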
Concept-Learning In The Absence Of Counter-Examples: An Autoassociation-Based Approach To Classification
, 1999
Cited by 21 (4 self)
The overwhelming majority of research currently pursued within the framework of concept-learning concentrates on discrimination-based learning, an inductive learning paradigm that relies on both examples and counter-examples of the concept. This emphasis, however, can present a practical problem: there are real-world engineering problems for which counter-examples are both scarce and difficult to gather. For these problems, recognition-based learning systems are much more appropriate because they do not use counter-examples in the concept-learning phase. The purpose of this dissertation is to analyze a connectionist recognition-based learning system, autoassociation-based classification, and answer the following questions: • What features of the autoassociator make it ca...
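The recognition-based idea can be sketched with a linear autoassociator, i.e. PCA, assuming the concept lies near a low-dimensional subspace: train a reconstruction on positive examples only, then classify by reconstruction error. This is an illustration of the general principle, not the dissertation's network; the subspace dimension, noise level, and 99% threshold are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)

# "Target concept" examples live near a 2-D plane in 10-D space;
# counter-examples are unavailable at training time.
d, k = 10, 2
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]          # plane spanning the concept
pos_train = rng.normal(size=(300, k)) @ basis.T + 0.05 * rng.normal(size=(300, d))

# Linear autoassociator = PCA: keep the top-k principal directions.
mean = pos_train.mean(axis=0)
_, _, Vt = np.linalg.svd(pos_train - mean, full_matrices=False)
P = Vt[:k].T @ Vt[:k]                                     # projector onto learned subspace

def recon_error(x):
    c = x - mean
    return np.linalg.norm(c - c @ P, axis=-1)

# Threshold set from the training data alone (no counter-examples needed).
threshold = np.quantile(recon_error(pos_train), 0.99)

pos_test = rng.normal(size=(100, k)) @ basis.T + 0.05 * rng.normal(size=(100, d))
neg_test = rng.normal(size=(100, d))                      # off-concept points

accept_pos = (recon_error(pos_test) <= threshold).mean()
reject_neg = (recon_error(neg_test) > threshold).mean()
print(f"accepted positives: {accept_pos:.2f}, rejected negatives: {reject_neg:.2f}")
```

Points on the concept manifold reconstruct well and are accepted; off-manifold points reconstruct poorly and are rejected, even though no counter-example was ever seen during training.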
Constructive Feedforward Neural Networks for Regression Problems: A Survey
, 1995
Cited by 21 (0 self)
In this paper, we review the procedures for constructing feedforward neural networks in regression problems. While standard backpropagation performs gradient descent only in the weight space of a network with fixed topology, constructive procedures start with a small network and then grow additional hidden units and weights until a satisfactory solution is found. The constructive procedures are categorized according to the resultant network architecture and the learning algorithm for the network weights. (The Hong Kong University of Science & Technology Technical Report Series, Department of Computer Science.) 1 Introduction In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. Among them, the class of multilayer feedforward networks is perhaps the most popular. Standard backpropagation performs gradient descent only in the weight space of a network with fixed topology; this approach is analogous to ...
Linear Unlearning for CrossValidation
 Advances in Computational Mathematics
, 1996
Cited by 21 (4 self)
The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. In this paper we suggest linear unlearning of examples as an approach to approximate cross-validation. Further, we discuss the possibility of exploiting the ensemble of networks offered by leave-one-out for performing ensemble predictions. We show that the generalization performance of the equally weighted ensemble predictor is identical to that of the network trained on the whole training set. Numerical experiments on the sunspot time series prediction benchmark demonstrate the potential of the linear unlearning technique.
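A linear-model analogue of the unlearning idea (not the paper's neural-network derivation): for ordinary least squares, the leave-one-out residuals have the closed form e_i / (1 − h_ii), where h_ii are the leverages, so no retraining is needed at all. The sketch below checks this identity against brute-force retraining.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 60, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
y = X @ rng.normal(size=d) + 0.3 * rng.normal(size=n)

# Full fit; leverages are the diagonal of the hat matrix H = X (X'X)^{-1} X'.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)

# Closed-form leave-one-out residuals: one fit instead of n.
loo_fast = resid / (1 - h)

# Brute force: retrain n times, once per held-out point.
loo_slow = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    b_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    loo_slow[i] = y[i] - X[i] @ b_i

print(f"max discrepancy: {np.max(np.abs(loo_fast - loo_slow)):.2e}")
```

The paper's contribution is, roughly, a local-linearization counterpart of this shortcut for trained neural networks, where exact retraining is far more expensive than a single linear solve.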
Predicting the Stock Market
, 1998
Cited by 20 (1 self)
This paper presents a tutorial introduction to predictions of stock time series. The various approaches of technical and fundamental analysis are presented and the prediction problem is formulated as a special case of inductive learning. The problems with performance evaluation of near-random-walk processes are illustrated with examples together with guidelines for avoiding the risk of data-snooping. The connections to concepts like "the bias-variance dilemma", overtraining and model complexity are further covered. Existing benchmarks and testing metrics are surveyed and some new measures are introduced.
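The near-random-walk pitfall can be illustrated with a quick simulation (an assumption-laden sketch, not from the paper): on a true random walk, a fitted AR(1) predictor of price changes does essentially no better than the naive "tomorrow equals today" benchmark, so any apparent edge on such data should be treated as noise until tested out of sample.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated log-price random walk: increments are i.i.d. noise.
steps = 0.01 * rng.normal(size=5000)
price = np.cumsum(steps)

# Naive random-walk benchmark: predict tomorrow's price as today's price.
naive_err = np.diff(price)

# A "sophisticated" AR(1) predictor fit on price changes.
dx = np.diff(price)
phi = np.dot(dx[:-1], dx[1:]) / np.dot(dx[:-1], dx[:-1])
ar_err = dx[1:] - phi * dx[:-1]

print(f"naive MSE {np.mean(naive_err**2):.2e}  AR(1) MSE {np.mean(ar_err**2):.2e}")
```

The fitted coefficient hovers near zero and the two error levels are indistinguishable, which is why evaluation against the random-walk baseline, rather than raw prediction error, is the meaningful test for stock-series models.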