Results 1 - 5 of 5
Real-time learning capability of neural networks
IEEE Trans. Neural Networks, 2006
Cited by 15 (8 self)
Abstract—In some practical applications of neural networks, fast response to external events within an extremely short time is highly demanded and expected. However, the extensively used gradient-descent-based learning algorithms obviously cannot satisfy the real-time learning needs in many applications, especially for large-scale applications and/or when higher generalization performance is required. Based on Huang's constructive network model, this paper proposes a simple learning algorithm capable of real-time learning which can automatically select appropriate values of neural quantizers and analytically determine the parameters (weights and bias) of the network at one time only. The performance of the proposed algorithm has been systematically investigated on a large batch of benchmark real-world regression and classification problems. The experimental results demonstrate that our algorithm can not only produce good generalization performance but also have real-time learning and prediction capability. Thus, it may provide an alternative approach for the practical applications of neural networks where real-time learning and prediction implementation is required. Index Terms—Backpropagation (BP), extreme learning machine, feedforward networks, generalization performance, NN, real-time learning, real-time prediction.
Error Minimized Extreme Learning Machine With Growth of Hidden Nodes and Incremental Learning, 2009
Cited by 9 (3 self)
One of the open problems in neural network research is how to automatically determine network architectures for given applications. In this brief, we propose a simple and efficient approach to automatically determine the number of hidden nodes in generalized single-hidden-layer feedforward networks (SLFNs), which need not be neural alike. This approach, referred to as error-minimized extreme learning machine (EM-ELM), can add random hidden nodes to SLFNs one by one or group by group (with varying group size). During the growth of the networks, the output weights are updated incrementally. The convergence of this approach is proved in this brief as well. Simulation results demonstrate and verify that our new approach is much faster than other sequential/incremental/growing algorithms while achieving good generalization performance.
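The grow-and-refit loop can be sketched as follows. This is a simplified illustration: it recomputes the pseudoinverse after each new node, whereas EM-ELM's contribution is updating the output weights incrementally; the node-by-node growth and error-driven stopping are the same idea.

```python
import numpy as np

def em_elm_train(X, y, target_error=1e-3, max_hidden=100, rng=None):
    """Grow random hidden nodes one by one until a training-error target is met.

    Simplified sketch: output weights are refit from scratch at each step
    (EM-ELM proper updates them incrementally, which is where its speed
    advantage over full retraining comes from).
    """
    rng = np.random.default_rng(rng)
    H = np.empty((X.shape[0], 0))  # hidden-output matrix, grown column by column
    Ws, bs = [], []
    while H.shape[1] < max_hidden:
        w = rng.standard_normal(X.shape[1])           # new random input weights
        b = rng.standard_normal()                     # new random bias
        Ws.append(w)
        bs.append(b)
        H = np.column_stack([H, np.tanh(X @ w + b)])  # append the new node
        beta = np.linalg.pinv(H) @ y                  # refit output weights
        if np.mean((H @ beta - y) ** 2) <= target_error:
            break                                     # error target reached
    return np.array(Ws), np.array(bs), beta
```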
Optimized Approximation Algorithm in Neural Networks Without Overfitting
Cited by 5 (1 self)
Abstract—In this paper, an optimized approximation algorithm (OAA) is proposed to address the overfitting problem in function approximation using neural networks (NNs). The optimized approximation algorithm avoids overfitting by means of a novel and effective stopping criterion based on the estimation of the signal-to-noise-ratio figure (SNRF). Using SNRF, which checks the goodness-of-fit in the approximation, overfitting can be automatically detected from the training error only, without use of a separate validation set. The algorithm has been applied to problems of optimizing the number of hidden neurons in a multilayer perceptron (MLP) and optimizing the number of learning epochs in MLP's backpropagation training using both synthetic and benchmark data sets. The OAA algorithm can also be utilized in the optimization of other parameters of NNs. In addition, it can be applied to the problem of function approximation using any kind of basis functions, or to the problem of learning model selection when overfitting needs to be considered. Index Terms—Function approximation, neural network (NN) learning, overfitting.
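The abstract does not give the SNRF formula itself. As an illustrative stand-in for a validation-free stopping signal, one can measure how much structure remains in the training residual, for example via its lag-1 autocorrelation: a residual that looks like white noise suggests nothing is left to fit. This is a hypothetical criterion in the spirit of SNRF, not the paper's definition.

```python
import numpy as np

def residual_snr_figure(residual):
    """Crude stand-in for an SNRF: lag-1 autocorrelation of the residual.

    Near zero  -> residual resembles white noise (stop growing the model).
    Clearly positive -> structured, unfitted signal remains (keep training).
    Illustrative only; NOT the paper's exact SNRF estimator.
    """
    r = residual - residual.mean()
    denom = np.dot(r, r)
    if denom == 0.0:
        return 0.0  # zero residual: nothing left to fit
    return np.dot(r[:-1], r[1:]) / denom
```

A training loop would grow the model (or run more epochs) while this figure stays above a small noise threshold, mirroring how OAA stops using the training error alone.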
A Regularized Learning Method for Neural Networks Based on Sensitivity Analysis
The sensitivity-based linear learning method (SBLLM) is a learning method for two-layer feedforward neural networks, based on sensitivity analysis, that calculates the weights by solving a system of linear equations. This yields an important saving in computational time, which significantly enhances the behavior of this method compared to other learning algorithms. This paper introduces a generalization of the SBLLM by adding a regularization term to the cost function. The theoretical basis for the method is given and its performance is illustrated.
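To see how a regularization term changes a weights-by-linear-system method, here is a ridge-style sketch: adding lam * ||beta||^2 to a squared-error cost keeps the solution a single linear solve, just with a modified system matrix. This is an illustrative form, not the SBLLM's actual sensitivity-based equations.

```python
import numpy as np

def regularized_linear_weights(H, y, lam=1e-3):
    """Solve for layer weights from a regularized least-squares cost.

    Adding lam * ||beta||^2 to the squared-error cost turns the normal
    equations into (H^T H + lam*I) beta = H^T y -- still one linear solve,
    so the computational-time advantage of the method is preserved.
    (Illustrative ridge form, not the SBLLM's exact sensitivity system.)
    """
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
```

As lam goes to zero this recovers the unregularized least-squares weights; larger lam shrinks the weight norm, trading training error for smoother solutions.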
Prediction of Commodities in Rationing System Using an Enhanced Regression Neural Network Algorithm
Abstract—Predictive analytics is an area of data mining that deals with extracting information from data and using it to predict trends and behaviour patterns. This paper predicts the usage of food commodities in the public distribution system in the coming years using the general regression neural network (GRNN) algorithm. The algorithm is enhanced using the NTP algorithm, which trains the data as per the requirements of the ration system in Tamilnadu. The GRNN is a memory-based network that provides estimates of continuous variables and converges to the underlying (linear or nonlinear) regression surface. It is a one-pass learning algorithm with a highly parallel structure. Even with sparse data in a multidimensional measurement space, the algorithm provides smooth transitions from one observed value to another. Keywords—NTP, NNT, PDS2, PGR, AAY
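The GRNN described above is the classic Specht formulation: a Gaussian-kernel-weighted average over stored training samples, computable in a single pass with no iterative training. A compact sketch (the smoothing parameter `sigma` is a free choice, not taken from the paper):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """General regression neural network prediction (Specht-style GRNN).

    One-pass and memory-based: every training sample acts as a pattern
    unit, and the output is a Gaussian-weighted average of the training
    targets, giving smooth transitions even between sparse observations.
    """
    # squared distances between each query and each stored training point
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian pattern-unit activations
    return (w @ y_train) / w.sum(axis=1)   # normalized weighted average
```

Because predictions are weighted averages of observed targets, outputs always stay within the range of the training data, which is what produces the smooth interpolation the abstract describes.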