Results 1 - 9 of 9
Real-time learning capability of neural networks
IEEE Trans. Neural Networks, 2006
Cited by 15 (8 self)
Abstract—In some practical applications of neural networks, fast response to external events within an extremely short time is highly demanded and expected. However, the extensively used gradient-descent-based learning algorithms obviously cannot satisfy the real-time learning needs of many applications, especially for large-scale applications and/or when higher generalization performance is required. Based on Huang's constructive network model, this paper proposes a simple learning algorithm capable of real-time learning, which can automatically select appropriate values of neural quantizers and analytically determine the parameters (weights and bias) of the network at one time only. The performance of the proposed algorithm has been systematically investigated on a large batch of benchmark real-world regression and classification problems. The experimental results demonstrate that our algorithm can not only produce good generalization performance but also have real-time learning and prediction capability. Thus, it may provide an alternative approach for the practical applications of neural networks where real-time learning and prediction implementation is required. Index Terms—Backpropagation (BP), extreme learning machine, feedforward networks, generalization performance, neural networks (NN), real-time learning, real-time prediction.
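The one-shot analytic training this abstract describes is in the extreme-learning-machine family (as the index terms confirm): hidden-layer parameters are random and fixed, and the output weights are determined analytically in a single step. A minimal NumPy sketch, not the authors' implementation; the helper names, network size, and toy data are illustrative:

```python
import numpy as np

def elm_train(X, y, n_hidden=30, seed=0):
    """ELM-style one-shot training: hidden weights/biases are random and
    fixed; output weights are solved analytically in one step via the
    Moore-Penrose pseudoinverse (no iterative gradient descent)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: fit y = sin(x) on [0, pi] in one pass.
X = np.linspace(0, np.pi, 100).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

Because training reduces to one pseudoinverse, the cost is dominated by a single linear-algebra call, which is what makes real-time learning plausible.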
Error Minimized Extreme Learning Machine With Growth of Hidden Nodes and Incremental Learning
2009
Cited by 9 (3 self)
One of the open problems in neural network research is how to automatically determine network architectures for given applications. In this brief, we propose a simple and efficient approach to automatically determine the number of hidden nodes in generalized single-hidden-layer feedforward networks (SLFNs), which need not be neural alike. This approach, referred to as error-minimized extreme learning machine (EM-ELM), can add random hidden nodes to SLFNs one by one or group by group (with varying group size). During the growth of the networks, the output weights are updated incrementally. The convergence of this approach is proved in this brief as well. Simulation results demonstrate and verify that our new approach is much faster than other sequential/incremental/growing algorithms while achieving good generalization performance.
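The grow-until-the-error-target-is-met loop can be sketched as follows. For clarity this sketch re-solves the output weights from scratch after each added node; the brief itself derives a recursive pseudoinverse update that avoids the full recomputation. Function names, the stopping target, and the toy data are illustrative:

```python
import numpy as np

def em_elm(X, y, target_rmse=0.01, max_hidden=50, seed=0):
    """Grow the hidden layer node by node: append a random hidden node,
    re-determine the output weights, and stop once the training RMSE
    reaches the target, so the architecture is chosen automatically."""
    rng = np.random.default_rng(seed)
    H = np.empty((X.shape[0], 0))                    # hidden-output matrix, grown columnwise
    for _ in range(max_hidden):
        w = rng.standard_normal(X.shape[1])          # new node: random input weights
        c = rng.standard_normal()                    # new node: random bias
        H = np.hstack([H, np.tanh(X @ w + c).reshape(-1, 1)])
        beta = np.linalg.pinv(H) @ y                 # updated output weights
        rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
        if rmse <= target_rmse:
            break
    return beta, rmse, H.shape[1]

# Toy run: grow until a parabola is fitted to within the target error.
X = np.linspace(-1, 1, 80).reshape(-1, 1)
y = X.ravel() ** 2
beta, rmse, n_nodes = em_elm(X, y)
```

The design point is that only the output weights ever change; the previously added random nodes are never retrained.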
Optimized Approximation Algorithm in Neural Networks without overfitting
IEEE Trans. on Neural Networks, 2008
INITIALS
1982
Cited by 1 (0 self)
Aii nutech ENGINEERS I REVISION CONTROL SHEET (Concluded) TITLE: Monticello Nuclear Generating Plant
A Regularized Learning Method for Neural Networks Based on Sensitivity Analysis
The sensitivity-based linear learning method (SBLLM) is a learning method for two-layer feedforward neural networks, based on sensitivity analysis, that calculates the weights by solving a system of linear equations. Therefore, there is an important saving in computational time, which significantly enhances the behavior of this method compared to other learning algorithms. This paper introduces a generalization of the SBLLM obtained by adding a regularization term to the cost function. The theoretical basis for the method is given and its performance is illustrated.
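The core idea the abstract relies on, weights obtained from a linear system with a regularization term added to the cost, corresponds to regularized (ridge) least squares. A generic sketch of that idea, not the SBLLM equations themselves; `lam` and the toy data are illustrative:

```python
import numpy as np

def regularized_solve(H, t, lam=1e-3):
    """Regularized least squares: minimize ||H w - t||^2 + lam * ||w||^2.
    The normal equations (H^T H + lam*I) w = H^T t form one linear
    system, so the weights are obtained without iterative training."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ t)

# Toy data: recover known weights from noisy linear observations.
rng = np.random.default_rng(1)
H = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
t = H @ w_true + 0.01 * rng.standard_normal(50)
w = regularized_solve(H, t)
```

The regularization term keeps the system well-conditioned and penalizes large weights, which is the usual motivation for adding it to a linear-system-based learner.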
Prediction of Commodities in Rationing System Using an Enhanced Regression Neural Network Algorithm
Abstract—Predictive analytics is an area of data mining that deals with extracting information from data and using it to predict trends and behaviour patterns. This paper predicts the usage of food commodities in the public distribution system in the coming years using the general regression neural network algorithm. The algorithm is enhanced using the NTP algorithm, which trains the data as per the requirements of the ration system in Tamil Nadu. A memory-based network provides estimates of continuous variables and converges to the underlying (linear or nonlinear) regression surface. This general regression neural network (GRNN) is a one-pass learning algorithm with a highly parallel structure. Even with sparse data in a multidimensional measurement space, the algorithm provides smooth transitions from one observed value to another. Keywords—NTP, NNT, PDS2, PGR, AAY.
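The GRNN the abstract builds on (Specht's general regression neural network) is a kernel-weighted average over stored samples, which is why it is one-pass and memory-based. A minimal sketch; the smoothing parameter `sigma`, helper name, and toy data are illustrative:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.05):
    """GRNN prediction: each estimate is a Gaussian-kernel-weighted
    average of the stored training targets.  'Training' is just storing
    the samples, so learning completes in a single pass."""
    # Squared distance from every query point to every stored pattern.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)      # summation / division layers

# Toy regression: estimate y = x^2 at x = 0.5 from 50 stored samples.
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = X.ravel() ** 2
yq = grnn_predict(X, y, np.array([[0.5]]))
```

Because the estimate is a smooth weighted average, predictions transition gradually between observed values even where the stored data are sparse, as the abstract notes.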
One-Class-at-a-Time Removal Sequence Planning Method for Multiclass Classification Problems
Abstract—Using dynamic programming, this work develops a one-class-at-a-time removal sequence planning method to decompose a multiclass classification problem into a series of two-class problems. Compared with previous decomposition methods, the approach has the following distinct features. First, under the one-class-at-a-time framework, the approach guarantees the optimality of the decomposition. Second, for a k-class problem, the number of binary classifiers required by the method is only k - 1. Third, to achieve higher classification accuracy, the approach can easily be adapted to form a committee machine. A drawback of the approach is that its computational burden increases rapidly with the number of classes. To resolve this difficulty, a partial decomposition technique is introduced that reduces the computational cost by generating a suboptimal solution. Experimental results demonstrate that the proposed approach consistently outperforms two conventional decomposition methods. Index Terms—Dynamic programming, multiclass classification, pattern recognition.
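The decomposition itself (setting aside the dynamic-programming choice of removal order, which is the paper's contribution) works with any base binary classifier: each classifier peels one class off the remaining ones, so a k-class problem needs only k - 1 classifiers. A sketch with a toy nearest-centroid rule and a fixed, hand-picked removal order, both purely illustrative:

```python
import numpy as np

def nearest_centroid_binary(X, y_bin):
    """Toy binary classifier: predict 1 if the point is closer to the
    positive-class centroid than to the negative-class centroid."""
    c0 = X[y_bin == 0].mean(axis=0)
    c1 = X[y_bin == 1].mean(axis=0)
    return lambda x: int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

def train_removal_sequence(X, y, order):
    """One-class-at-a-time decomposition: classifier i separates class
    order[i] from the classes not yet removed, then that class's samples
    are removed.  The last remaining class needs no classifier."""
    classifiers = []
    for cls in order[:-1]:
        classifiers.append((cls, nearest_centroid_binary(X, (y == cls).astype(int))))
        keep = y != cls
        X, y = X[keep], y[keep]
    classifiers.append((order[-1], None))     # default: last remaining class
    return classifiers

def predict(classifiers, x):
    """Apply the classifiers in removal order; the first positive wins."""
    for cls, clf in classifiers:
        if clf is None or clf(x) == 1:
            return cls

# Three well-separated clusters, classes 0, 1, 2.
X = np.array([[0, 0], [0.1, 0], [5, 5], [5.1, 5], [10, 0], [10, 0.1]])
y = np.array([0, 0, 1, 1, 2, 2])
clfs = train_removal_sequence(X, y, order=[0, 1, 2])
preds = [predict(clfs, x) for x in X]
```

Since classes are removed as the sequence progresses, later classifiers face strictly smaller problems, which is why the order in which classes are removed matters and is worth optimizing.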