Results 1–10 of 14
Constructive Incremental Learning from Only Local Information
, 1998
Abstract

Cited by 160 (37 self)
... This article illustrates the potential learning capabilities of purely local learning and offers an interesting and powerful approach to learning with receptive fields.
Receptive Field Weighted Regression
, 1997
Abstract

Cited by 12 (7 self)
We introduce a constructive, incremental learning system for regression problems that models data by means of spatially localized linear models. In contrast to other approaches, the size and shape of the receptive field of each locally linear model as well as the parameters of the locally linear model itself are learned independently, i.e., without the need for competition or any other kind of communication. This characteristic is accomplished by incrementally minimizing a weighted penalized local cross-validation error. As a result, we obtain a learning system that can allocate resources as needed while dealing with the bias-variance dilemma in a principled way. The spatial localization of the linear models increases robustness towards negative interference. Our learning system can be interpreted as a nonparametric adaptive bandwidth smoother, as a mixture of experts where the experts are trained in isolation, and as a learning system which profits from combining independent expert knowledge on the same problem. It illustrates the potential learning capabilities of purely local learning and offers an interesting and powerful approach to learning with receptive fields.
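As a rough illustration of blending spatially localized linear models, the following sketch predicts by weighting each local linear model by its receptive-field activation. This is not the authors' RFWR implementation: the Gaussian receptive fields, the fixed widths, and all function names are assumptions made purely for illustration.

```python
import numpy as np

def gaussian_weight(x, center, width):
    # Receptive field activation: Gaussian kernel around the center
    # (illustrative choice; widths are assumed fixed here, whereas RFWR
    # learns the size and shape of each receptive field).
    d = (x - center) / width
    return np.exp(-0.5 * np.dot(d, d))

def local_linear_predict(x, centers, widths, betas):
    # Blend the predictions of spatially localized linear models,
    # each weighted by its receptive field activation. Each beta
    # holds an offset b[0] and a slope vector b[1:].
    num, den = 0.0, 0.0
    for c, w, b in zip(centers, widths, betas):
        a = gaussian_weight(x, c, w)
        num += a * (b[0] + b[1:] @ (x - c))  # locally linear model
        den += a
    return num / den if den > 0 else 0.0
```

With a single local model centered at the origin, the prediction reduces to that model's linear fit, which makes the blending easy to check by hand.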
Neural Networks in the Context of Autonomous Agents: Important Concepts Revisited
 Proc. of the Artificial Neural Networks in Engineering Conf.
, 1996
Abstract

Cited by 5 (4 self)
Artificial neural networks have been successfully used in many technical applications. They are also important tools for the control of autonomous agents. The major goal of research on autonomous agents is to study intelligence as the result of a system-environment interaction, rather than understanding intelligence on a computational level. In contrast to other applications, autonomous agents might not distinguish between a learning and a performance phase; they have to continuously learn while they are behaving in their environment. Thus, a neural network for autonomous agents should feature incremental learning, should not exhibit overlearning and should not suffer from a high VC dimension. The review presented in this paper reveals that most existing models are not ideally suited for autonomous agents. The main goals of this paper are (1) to discuss the autonomous agents perspective, (2) to identify important properties of neural networks for autonomous agents, and (3), very impo...
Sensitivity Analysis for Selective Learning by Feedforward Neural Networks
, 2001
Abstract

Cited by 3 (1 self)
Research on improving the performance of feedforward neural networks has concentrated mostly on the optimal setting of initial weights and learning parameters, sophisticated optimization techniques, architecture optimization, and adaptive activation functions. An alternative approach is presented in this paper where the neural network dynamically selects training patterns from a candidate training set during training, using the network's current attained knowledge about the target concept. Sensitivity analysis of the neural network output with respect to small input perturbations is used to quantify the informativeness of candidate patterns. Only the most informative patterns, which are those patterns closest to decision boundaries, are selected for training. Experimental results show a significant reduction in the training set size, without negatively influencing generalization performance and convergence characteristics. This approach to selective learning is then compared to an alternative where informativeness is measured as the magnitude in prediction error.
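The selection idea above can be sketched as follows, with a finite-difference estimate of output sensitivity standing in for the analytical sensitivity analysis the paper uses; the function names and the black-box model `f` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def output_sensitivity(f, x, eps=1e-3):
    # Approximate the gradient norm of the network output w.r.t. the
    # input via central finite differences; large values indicate
    # patterns close to a decision boundary.
    grads = []
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grads.append((f(xp) - f(xm)) / (2 * eps))
    return np.linalg.norm(grads)

def select_informative(f, candidates, k):
    # Keep the k candidate patterns whose outputs are most sensitive
    # to small input perturbations (the most informative ones).
    scored = sorted(candidates, key=lambda x: -output_sensitivity(f, x))
    return scored[:k]
```

For a sigmoid classifier, a pattern sitting on the decision boundary scores far higher than one deep inside a class region, so it is the one retained for training.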
A Constructive Learning Algorithm for an HME
 In Proceedings of the IEEE International Conference on Neural Networks
, 1996
Abstract

Cited by 3 (0 self)
A Hierarchical Mixtures of Experts (HME) model has been applied to several classes of problems, and its usefulness has been shown. However, defining an adequate structure in advance is required and the resulting performance depends on the structure. To overcome this problem, a constructive learning algorithm for an HME is proposed; it includes an initialization method, a training method and an extension method. In our experiments, which used parity problems and a function approximation problem, the proposed algorithm worked much better than the conventional method.
Quantization and Pruning of Multilayer Perceptrons: Towards Compact Neural Networks
, 1997
Abstract

Cited by 2 (1 self)
A connectionist system or neural network is a massively parallel network of weighted interconnections, which connect one or more layers of nonlinear processing elements (neurons). To fully profit from the inherent parallel processing of these networks, development of parallel hardware implementations is essential. However, these hardware implementations often differ in various ways from the ideal mathematical description of a neural network model. It is, for example, required to have quantized network parameters, in both electronic and optical implementations of neural networks. This can be because device operation is quantized or a coarse quantization of network parameters is beneficial for designing compact networks. Most of the standard algorithms for training neural networks are not suitable for quantized networks because they are based on gradient descent and require a high accuracy of the network parameters. Several weight discretization techniques have been developed to reduce the required accuracy further without deterioration of network performance. One of the earliest of these techniques [Fiesler88] is further investigated and improved in this report. Another way to obtain compact networks is by minimizing their topology for the problem at hand. However, it is impossible to know a priori the size of such minimal network topology. An unsuitable
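A minimal sketch of uniform weight quantization, the kind of discretization such hardware-oriented training schemes must accommodate; the uniform grid, bit width, and clipping range are illustrative assumptions, not the technique of [Fiesler88].

```python
import numpy as np

def quantize(w, num_bits=4, w_max=1.0):
    # Uniformly quantize weights onto 2**num_bits - 1 levels spanning
    # [-w_max, w_max]. A common discretization scheme keeps
    # full-precision shadow weights during training and quantizes them
    # on the forward pass.
    levels = 2 ** num_bits - 1
    step = 2.0 * w_max / levels
    return np.clip(np.round(w / step) * step, -w_max, w_max)
```

Values already on the grid (including the extremes) pass through unchanged; everything else snaps to the nearest representable level.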
Feedforward Neural Network Design with Tridiagonal Symmetry Constraints
, 1999
Abstract

Cited by 1 (1 self)
This paper introduces a pruning algorithm with tridiagonal symmetry constraints for feedforward neural network design. The algorithm uses a reflection transform applied to the input-hidden weight matrix in order to reduce it to its tridiagonal form. The designed FANN structures obtained by applying the proposed algorithm are compact and symmetrical. Therefore, they are well suited for efficient hardware and software implementations. Moreover, the number of the FANN parameters is reduced without a significant loss in performance. We illustrate the complexity and performance of the proposed algorithm by applying it as a solution to a nonlinear regression problem. We also compare the results of our proposed algorithm with those of the Optimal Brain Damage algorithm. EDICS: SP 6.1.5 This work was supported by the Natural Sciences and Engineering Research Council of Canada under contract #06P0187668. 1 Introduction Feedforward neural network (FANN) design has lately attracted...
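The reflection transform mentioned here is, in spirit, a Householder step. The sketch below shows one such reflection applied as a similarity transform to a symmetric matrix, zeroing the first column below the subdiagonal; this is the standard textbook step, not the paper's algorithm, which operates on the input-hidden weight matrix (not necessarily square).

```python
import numpy as np

def householder_step(W):
    # One Householder reflection that zeroes the entries of the first
    # column of W below the first subdiagonal -- the basic step used
    # to reduce a symmetric matrix toward tridiagonal form. Assumes
    # the column below the diagonal is nonzero.
    x = W[1:, 0].astype(float)
    v = x.copy()
    v[0] += np.copysign(np.linalg.norm(x), x[0])
    v /= np.linalg.norm(v)
    H = np.eye(len(W))
    H[1:, 1:] -= 2.0 * np.outer(v, v)  # Householder reflector block
    return H @ W @ H.T  # similarity transform preserves symmetry
```

Repeating the step on successive trailing submatrices yields the full tridiagonal form; because H is orthogonal and symmetric, the transform preserves both the eigenvalues and the symmetry of W.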
A Neural Networks Construction Method based on Boolean Logic
 IEEE International Conference on Tools with Artificial Intelligence
, 1996
Abstract

Cited by 1 (0 self)
A neural network construction method for problems specified by data sets with input and/or output values in the continuous or discrete domain is described and evaluated. This approach is based on a Boolean approximation of the data set and is generic for various neural network architectures. The construction method takes advantage of a construction method for Boolean problems without increasing the dimensions of the input or output vectors, which is a strong advantage over approaches that work on a binarized version of the data set with an increased number of input and output elements. Further, the networks are pruned in a second phase in order to obtain very small networks. Keywords: construction of networks, pruning, generalization, optimality criteria, high order perceptrons, backpropagation neural networks. 1 Introduction A major problem in applying neural networks is the choice of a (minimal) topology [5]: a considerable number of architectures and methods for the construction o...
A Comparative Study of the Cascade-Correlation Architecture in Pattern Recognition Applications
Abstract

Cited by 1 (0 self)
In this work, an experimental evaluation of the cascade-correlation architecture is carried out on different benchmark pattern recognition problems. An extensive experimental framework is developed to establish a comparison between the cascade-correlation network (CASCOR) and the more traditional multilayer perceptron (MLP) and radial basis function (RBF) models. The different network configurations are evaluated with respect to generalization performance in three practical real-world tasks: the diagnosis of coronary diseases (heart), the credit screening problem (card) and the recognition of handwritten characters. The issue of catastrophic forgetting in MLP and CASCOR models is also considered. In addition to some clear potential advantages observed in the cascade-correlation network, such as the definition of the number of hidden units during learning, the satisfactory practical results obtained suggest that the CASCOR model may represent in some situations an alternative to other t...
Improving the Performance of Piecewise Linear Separation Incremental Algorithms for Practical Hardware Implementations
Abstract
In this paper we shall review the common problems associated with Piecewise Linear Separation incremental algorithms. This kind of neural model yields poor performance when dealing with some classification problems, due to the evolving schemes used to construct the resulting networks. So as to avoid this undesirable behavior we shall propose a modification criterion. It is based upon the definition of a function which provides information about the quality of the network growth process during the learning phase. This function is evaluated periodically as the network structure evolves, and will permit us, as we shall show through exhaustive benchmarks, to considerably improve the performance (measured in terms of network complexity and generalization capabilities) of the networks generated by these incremental models.