Results 1–10 of 16
A Constructive Algorithm for Training Cooperative Neural Network Ensembles
 IEEE Transactions on Neural Networks
, 2003
Abstract

Cited by 44 (16 self)
This paper presents a constructive algorithm for training cooperative neural network ensembles (CNNEs). CNNE combines ensemble architecture design with cooperative training for individual neural networks (NNs) in ensembles. Unlike most previous studies on training ensembles, CNNE puts emphasis on both accuracy and diversity among individual NNs in an ensemble. In order to maintain accuracy among individual NNs, the number of hidden nodes in individual NNs is also determined by a constructive approach. Incremental training based on negative correlation is used in CNNE to train individual NNs for different numbers of training epochs. The use of negative correlation learning and of different training epochs for individual NNs reflects CNNE's emphasis on diversity among individual NNs in an ensemble. CNNE has been tested extensively on a number of benchmark problems in machine learning and neural networks, including the Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, soybean, and Mackey-Glass time series prediction problems. The experimental results show that CNNE can produce NN ensembles with good generalization ability.
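Negative correlation learning, which the abstract credits for diversity among ensemble members, can be sketched as follows. This is a minimal illustration rather than CNNE itself: the function name, the penalty coefficient `lam`, and the common simplification of the gradient are assumptions based on the standard formulation of negative correlation learning, not on this paper.

```python
import numpy as np

def nc_error_signals(outputs, target, lam=0.5):
    """Per-network error signals under negative correlation learning.

    outputs: shape (M,), the M ensemble members' outputs for one sample;
    target: the desired output d. Network i minimises
        E_i = 0.5 * (F_i - d)**2 + lam * p_i,  with  p_i = -(F_i - F_bar)**2,
    and the commonly used gradient w.r.t. F_i (other outputs held fixed,
    the 1/M correction dropped) is
        dE_i/dF_i = (F_i - d) - lam * (F_i - F_bar).
    lam = 0 recovers independent mean-squared-error training.
    """
    f_bar = outputs.mean()                        # ensemble mean output
    return (outputs - target) - lam * (outputs - f_bar)
```

Each network would backpropagate its own signal, so members whose outputs sit above the ensemble mean are pushed below it and vice versa, which is what encourages diversity.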
Extraction of Rules from Artificial Neural Networks for Nonlinear Regression
, 2002
Abstract

Cited by 18 (0 self)
Neural networks have been successfully applied to solve a variety of application problems including classification and function approximation. They are especially useful as function approximators because they do not require prior knowledge of the input data distribution and they have been shown to be universal approximators. In many applications, it is desirable to extract knowledge that can explain how the problems are solved by the networks. Most existing approaches have focused on extracting symbolic rules for classification. Few methods have been devised to extract rules from trained neural networks for regression. This article presents an approach for extracting rules from trained neural networks for regression. Each rule in the extracted rule set corresponds to a subregion of the input space and a linear function involving the relevant input attributes of the data approximates the network output for all data samples in this subregion. Extensive experimental results on 32 benchmark data sets demonstrate the effectiveness of the proposed approach in generating accurate regression rules.
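The extracted rules described above have the form "IF the input lies in a subregion THEN the output is a linear function of the relevant attributes". The sketch below illustrates that rule format only; splitting on a single feature at user-supplied thresholds is an assumption made for illustration and is not the paper's extraction procedure, which derives the subregions from the trained network.

```python
import numpy as np

def fit_region_rules(X, y, boundaries, feature=0):
    """Fit one linear rule per axis-aligned subregion.

    boundaries: sorted thresholds on X[:, feature] defining the regions
    (an illustrative assumption, not a learned partition). Returns a list
    of (low, high, w) triples: for samples with low <= X[:, feature] < high,
    the rule predicts [x, 1] @ w (last weight is the bias term).
    """
    rules = []
    edges = [-np.inf] + list(boundaries) + [np.inf]
    for low, high in zip(edges[:-1], edges[1:]):
        mask = (X[:, feature] >= low) & (X[:, feature] < high)
        if mask.sum() == 0:
            continue                                    # empty region: no rule
        X_aug = np.hstack([X[mask], np.ones((mask.sum(), 1))])
        w, *_ = np.linalg.lstsq(X_aug, y[mask], rcond=None)
        rules.append((low, high, w))
    return rules
```

For instance, fitting y = |x| with a single boundary at 0 recovers the two exact linear pieces y = -x and y = x.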
Deterministic Nonmonotone Strategies for Effective Training of Multilayer Perceptrons
Abstract

Cited by 6 (4 self)
In this paper, we present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., deterministic training algorithms in which error function values are allowed to increase at some epochs. To this end, we argue that the current error function value must satisfy a nonmonotone criterion with respect to the maximum error function value over a number of previous epochs, and we propose a subprocedure to dynamically compute this number. The nonmonotone strategy can be incorporated into any batch training algorithm and provides fast, stable, and reliable learning. Experimental results on different classes of problems show that this approach improves the convergence speed and success percentage of first-order training algorithms and alleviates the need for fine-tuning problem-dependent heuristic parameters.
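The nonmonotone criterion described above (the current error may increase, but must not exceed the maximum error over a window of previous epochs) can be sketched as follows. The fixed window length is an assumption: the paper's contribution includes a subprocedure that computes it dynamically, which is not reproduced here.

```python
from collections import deque

def make_nonmonotone_check(window=10):
    """Return a nonmonotone acceptance test for per-epoch error values.

    A new error is accepted if it does not exceed the maximum error over
    the last `window` recorded epochs, so limited increases are allowed
    (unlike a strictly monotone criterion). Every observed error enters
    the history; a rejected epoch would typically trigger e.g. a learning
    rate adjustment in the surrounding training loop (not shown).
    """
    history = deque(maxlen=window)

    def accept(error):
        ok = (not history) or error <= max(history)
        history.append(error)
        return ok

    return accept
```

For example, after errors 1.0, 0.8, an increase back to 1.0 is still accepted, while 1.1 is rejected because it exceeds the window maximum.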
Bagging and Boosting Negatively Correlated Neural Networks
, 2008
Abstract

Cited by 3 (1 self)
In this paper we propose two cooperative ensemble learning algorithms, NegBagg and NegBoost, for designing neural network (NN) ensembles. The proposed algorithms train different individual NNs in an ensemble incrementally using the negative correlation learning algorithm. Bagging and boosting algorithms are used in NegBagg and NegBoost, respectively, to create different training sets for different NNs in the ensemble. The idea behind using negative correlation learning in conjunction with the bagging/boosting algorithms is to facilitate interaction and cooperation among NNs during their training. Both NegBagg and NegBoost use a constructive approach to determine automatically the number of hidden neurons for NNs. NegBoost also uses the constructive approach to determine automatically the number of NNs for the ensemble. The two algorithms have been tested on a number of benchmark problems in machine learning and neural networks, including the Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, satellite, soybean, and waveform problems. The experimental results show that NegBagg and NegBoost require a small number of training epochs to produce compact NN ensembles with good generalization.
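The bagging side of NegBagg rests on bootstrap resampling: each member NN receives its own training set drawn with replacement from the original data. A minimal sketch of that step only, assuming NumPy; how NegBagg interleaves these sets with negative correlation updates and grows hidden neurons constructively is not reproduced here.

```python
import numpy as np

def bootstrap_sets(X, y, n_members, rng=None):
    """Draw one bootstrap sample per ensemble member, as in bagging.

    Each sample has the same size as the original set and is drawn with
    replacement, so on average about 63% of the original points appear
    in any one member's training set.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    sets = []
    for _ in range(n_members):
        idx = rng.integers(0, n, size=n)   # indices sampled with replacement
        sets.append((X[idx], y[idx]))
    return sets
```

Because each index selects the feature row and its label together, feature/label pairs stay aligned across the resampled sets.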
Abstract
I am heartily thankful to my supervisor, Prof. Tom Gedeon, whose encouragement, guidance and support helped me in many respects during the completion of the project.
Fuzzy Signature Based Radial Basis Neural Network
, 2011
Abstract
Gedeon and his PhD student Dingyun Zhu. I also thank my friends Huajie Wu and Tengfei
Performance Comparison of Neural Architectures for On-Line Speed Estimation in
Abstract
Abstract—The performance of a sensorless controlled induction motor drive depends on the accuracy of the estimated speed. Conventional estimation techniques, being mathematically complex, require more execution time, resulting in poor dynamic response. The nonlinear mapping capability and powerful learning algorithms of neural networks provide a promising alternative for online speed estimation. The online speed estimator requires the NN model to be accurate, simple in design, structurally compact, and computationally inexpensive, to ensure fast execution and effective control in real-time implementation. This, in turn, depends to a large extent on the type of neural architecture. This paper investigates three types of neural architectures for online speed estimation and compares their performance in terms of accuracy, structural compactness, computational complexity, and execution time. The neural architecture suitable for online speed estimation is identified, and the promising results obtained are presented.
Keywords—Sensorless IM drives, rotor speed estimators, artificial neural network, feedforward architecture, single neuron cascaded architecture.
Abstract—Rotor Flux based Model Reference Adaptive System
Abstract
(RF-MRAS) is the most widely used conventional speed estimation scheme for sensorless IM drives. In this scheme, the voltage model equations are used for the reference model. This encounters major drawbacks at low frequencies/speeds, which lead to the poor performance of RF-MRAS. Replacing the reference model with a Neural Network (NN) based flux estimator provides an alternative solution and addresses these drawbacks. This paper identifies an NN based flux estimator using a Single Neuron Cascaded (SNC) architecture. The proposed SNC-NN model replaces the conventional voltage model in RF-MRAS to form a novel MRAS scheme named SNC-NN-MRAS. Through simulation, the proposed SNC-NN-MRAS is shown to be promising in terms of all major issues and robust to parameter variation. The suitability of the proposed SNC-NN-MRAS based speed estimator and its advantages over RF-MRAS for sensorless induction motor drives are comprehensively demonstrated through extensive simulations.
Keywords—Sensorless operation, vector-controlled IM drives, SNC-NN-MRAS, single neuron cascaded architecture, RF-MRAS, artificial neural network
Heuristics for the selection of weights in sequential feedforward neural networks: An experimental study
, 2007
Variations of the two-spiral task
, 2007
Abstract
The two-spiral task is a well-known benchmark for binary classification. The data consist of points on two intertwined spirals which cannot be linearly separated. This article reviews how this task and some of its variations have significantly inspired the development of several important methods in the history of artificial neural networks. The two-spiral task became popular for several different reasons: 1) It was regarded as extremely challenging; 2) It belonged to a suite of standard benchmark tasks; 3) It had visual appeal and was convenient to use in pilot studies. The article also presents an example which demonstrates how small variations of the two-spiral task such as relative rotations of the two spirals can lead to qualitatively different generalisation results.
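The two-spiral data can be generated with the parameterisation commonly attributed to Lang and Witbrock (97 points per spiral, the second spiral being the point reflection of the first). The constants below follow that common version and may differ across the variations of the task the article discusses.

```python
import numpy as np

def two_spirals():
    """Generate the classic two-spiral benchmark data.

    Returns X of shape (194, 2) and binary labels of shape (194,).
    Spiral 1 winds outward-in from radius 6.5; spiral 2 is its
    point reflection through the origin.
    """
    i = np.arange(97)
    angle = i * np.pi / 16.0
    radius = 6.5 * (104 - i) / 104.0       # shrinks linearly along the spiral
    x = radius * np.sin(angle)
    y = radius * np.cos(angle)
    spiral1 = np.column_stack([x, y])
    spiral2 = -spiral1                     # second class: point reflection
    X = np.vstack([spiral1, spiral2])
    labels = np.concatenate([np.zeros(97), np.ones(97)])
    return X, labels
```

Rotating one spiral relative to the other (the variation the article studies) amounts to multiplying `spiral2` by a 2×2 rotation matrix before stacking.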