Results 11–20 of 40
A comprehensive survey on functional link neural networks and an adaptive PSO–BP learning for CFLNN
, 2009
Abstract

Cited by 4 (0 self)
Functional link neural networks (FLNNs) are a class of higher order neural networks (HONs) that have gained extensive popularity in recent years. FLNNs have been successfully used in many applications such as system identification, channel equalization, short-term electric-load forecasting, and some of the tasks of data mining. The goals of this paper are to: (1) provide readers who are new to this area with a basis for understanding FLNNs and a comprehensive survey, while offering specialists an updated picture of the depth and breadth of the theory and applications; (2) present a new hybrid learning scheme for the Chebyshev functional link neural network (CFLNN); and (3) suggest possible remedies and guidelines for practical applications in data mining. We then validate the proposed learning scheme for CFLNN in classification by an extensive simulation study. Comprehensive performance comparisons with a number of existing methods are presented.
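The "functional link" idea the abstract refers to can be sketched as follows: each input component is expanded with Chebyshev polynomials, and the network output is then just a linear map on the expanded features, with no hidden layer. This is a minimal illustrative sketch, not the paper's PSO–BP learning scheme; the function name and the expansion order are my own.

```python
import numpy as np

def chebyshev_expand(x, order=3):
    """Expand each component of x with Chebyshev polynomials T_0..T_order.

    Uses the recurrence T_0(z) = 1, T_1(z) = z, T_{n+1}(z) = 2z*T_n(z) - T_{n-1}(z).
    Inputs are assumed scaled to [-1, 1].
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    feats = []
    for z in x:
        t = [1.0, z]
        for _ in range(2, order + 1):
            t.append(2.0 * z * t[-1] - t[-2])
        feats.extend(t[: order + 1])
    return np.array(feats)

# A CFLNN output is then a linear map on the expanded features,
# y = w @ chebyshev_expand(x), so only the weight vector w is learned.
```

Because the model is linear in the weights after the expansion, any linear or gradient-based learner (such as the paper's hybrid PSO–BP scheme) can be used to fit `w`.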
Hybrid Wavelet Model Construction Using Orthogonal Forward Selection with Boosting Search
Abstract

Cited by 3 (0 self)
This paper considers sparse regression modeling using a generalized kernel model in which each kernel regressor has its individually tuned center vector and diagonal covariance matrix. An orthogonal least squares forward selection procedure is employed to select the regressors one by one using a guided random search algorithm. To prevent possible overfitting, a practical method for selecting the termination threshold is used. A novel hybrid wavelet is constructed to make the model sparser. The experimental results show that this generalized model outperforms traditional methods in terms of precision and sparseness, and that models with wavelet and hybrid kernels converge much faster than those with a conventional RBF kernel.
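The orthogonal least squares forward selection step mentioned above can be sketched as a greedy loop: at each stage, every remaining candidate regressor is orthogonalized against the already-selected basis, and the one that explains the most residual energy is added. This is a minimal sketch of plain OLS forward selection; the paper's guided random search over tunable kernel parameters and its termination-threshold rule are not reproduced here, and all names are illustrative.

```python
import numpy as np

def ols_forward_select(Phi, y, n_terms):
    """Greedy orthogonal-least-squares forward selection (minimal sketch).

    Phi : (N, M) candidate regressor matrix; y : (N,) target vector.
    At each step, the candidate whose orthogonalized direction explains
    the most residual energy is added to the model.
    """
    N, M = Phi.shape
    selected, Q = [], []          # chosen column indices, orthonormal basis
    r = y.astype(float).copy()    # current residual
    for _ in range(n_terms):
        best, best_gain, best_q = None, -1.0, None
        for j in range(M):
            if j in selected:
                continue
            q = Phi[:, j].astype(float).copy()
            for u in Q:                      # Gram-Schmidt against the basis
                q -= (u @ q) * u
            nq = np.linalg.norm(q)
            if nq < 1e-12:                   # skip near-degenerate candidates
                continue
            q /= nq
            gain = (q @ r) ** 2              # residual energy this term explains
            if gain > best_gain:
                best, best_gain, best_q = j, gain, q
        selected.append(best)
        Q.append(best_q)
        r -= (best_q @ r) * best_q           # deflate the residual
    return selected, r
```

In practice the loop terminates when the remaining residual energy falls below a threshold rather than after a fixed number of terms.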
A New RBF Neural Network With Boundary Value Constraints
Abstract

Cited by 3 (2 self)
We present a novel topology of the radial basis function (RBF) neural network, referred to as the boundary value constraints (BVC)-RBF, which is able to automatically satisfy a set of BVC. Unlike most existing neural networks, whereby the model is identified via learning from observational data only, the proposed BVC-RBF offers a generic framework that takes into account both deterministic prior knowledge and stochastic data in an intelligent manner. Like a conventional RBF, the proposed BVC-RBF has a linear-in-the-parameter structure, so that many of the existing algorithms for linear-in-the-parameters models are directly applicable. The BVC satisfaction properties of the proposed BVC-RBF are discussed. Finally, numerical examples based on the combined D-optimality-based orthogonal least squares algorithm are used to illustrate the performance of the proposed BVC-RBF. Index Terms—Boundary value constraints (BVC), D-optimality,
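One standard way such automatic constraint satisfaction can work is to add a function that interpolates the boundary values and to multiply the RBF expansion by a function that vanishes on the boundary, so the constraints hold for any weight values. The sketch below shows this idea in 1-D; it is an illustrative form, not necessarily the paper's exact construction, and all names are my own.

```python
import numpy as np

def bvc_rbf_1d(x, centers, weights, width, f0, f1):
    """Sketch of a boundary-value-constrained RBF model on [0, 1].

    h(x) = f0 + (f1 - f0) * x   interpolates the constraints f(0)=f0, f(1)=f1;
    g(x) = x * (1 - x)          vanishes at both boundaries;
    so f(x) = h(x) + g(x) * rbf(x) satisfies the constraints for ANY weights,
    and the model remains linear in the parameters `weights`.
    """
    x = np.asarray(x, dtype=float)
    rbf = sum(w * np.exp(-((x - c) ** 2) / width ** 2)
              for w, c in zip(weights, centers))
    return f0 + (f1 - f0) * x + x * (1.0 - x) * rbf
```

Because the constraint handling is baked into the structure, ordinary least-squares-style algorithms can fit `weights` to data without ever risking a constraint violation.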
Robust neuro-fuzzy rule base knowledge extraction and estimation using subspace decomposition combined with regularization and D-optimality
 IEEE Trans. Systems, Man and Cybernetics, Part B
, 2004
Abstract

Cited by 3 (2 self)
A new robust neuro-fuzzy model construction algorithm is introduced for the modeling of a priori unknown dynamical systems from observed finite data sets, in the form of a set of fuzzy rules. Based on a Takagi–Sugeno (T–S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximal model robustness and sparsity, a new robust extended Gram–Schmidt (G–S) method is introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces so as to enhance model transparency, with the capability of interpreting the derived rule base energy level. A locally regularized orthogonal least squares algorithm, combined with D-optimality for subspace-based rule selection, is extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm. Index Terms—Neuro-fuzzy networks, optimal experimental design, orthogonal decomposition, regularization, subspace.
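The D-optimality criterion used here for rule selection rewards subsets of regressors whose design information matrix has maximal determinant, which favors well-conditioned, informative models. A minimal greedy sketch of that selection idea, with illustrative names and a small ridge term of my own to keep the determinant defined:

```python
import numpy as np

def greedy_d_optimal(Phi, n_terms, ridge=1e-8):
    """Greedy D-optimality-based subset selection (minimal sketch).

    At each step, add the candidate column that maximizes
    det(Phi_S^T Phi_S) of the selected design matrix. A tiny ridge term
    keeps the determinant well defined for ill-conditioned subsets.
    """
    N, M = Phi.shape
    selected = []
    for _ in range(n_terms):
        best, best_logdet = None, -np.inf
        for j in range(M):
            if j in selected:
                continue
            S = Phi[:, selected + [j]]
            _, logdet = np.linalg.slogdet(S.T @ S + ridge * np.eye(S.shape[1]))
            if logdet > best_logdet:
                best, best_logdet = j, logdet
        selected.append(best)
    return selected
```

Note how a candidate that is nearly collinear with an already-selected column barely increases the determinant, so it is passed over in favor of directions that add new information.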
Logistic Regression by Means of Evolutionary Radial Basis Function Neural Networks
Abstract

Cited by 2 (0 self)
This paper proposes a hybrid multilogistic methodology, named logistic regression using initial and radial basis function (RBF) covariates. The process for obtaining the coefficients is carried out in three steps. First, an evolutionary programming (EP) algorithm is applied in order to produce an RBF neural network (RBFNN) with a reduced number of RBF transformations and the simplest structure possible. Then, the initial attribute space (or, as it is commonly known in the logistic regression literature, the covariate space) is transformed by adding the nonlinear transformations of the input variables given by the RBFs of the best individual in the final generation. Finally, a maximum likelihood optimization method determines the coefficients associated with a multilogistic regression model built in this augmented covariate space. In this final step, two different multilogistic regression algorithms are applied: one considers all initial and RBF covariates (multilogistic initial-RBF regression), and the other incrementally constructs the model and applies cross-validation, resulting in automatic covariate selection [simple-logistic initial-RBF regression (SLIRBF)]. Both methods include a regularization parameter, which is also optimized. The proposed methodology is tested on 18 well-known benchmark classification problems and two real agronomical problems. The results are compared with the corresponding multilogistic regression methods applied to the initial covariate space, with the RBFNNs obtained by the EP algorithm, and with other probabilistic classifiers, including different RBFNN design methods [e.g., relaxed variable kernel density estimation, support vector machines, and a sparse classifier (sparse multinomial logistic regression)] and a procedure similar to SLIRBF but using product unit basis functions. The SLIRBF models are found to be competitive when compared with the corresponding multilogistic regression methods and the RBF-EP method. A measure of statistical significance is used, which indicates that SLIRBF reaches the state of the art.
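The augmentation step described above, appending RBF outputs to the original covariates before a regularized logistic fit, can be sketched as follows. This is a minimal sketch under my own assumptions: the RBF centers are given (the paper finds them by evolutionary programming), and plain gradient descent stands in for the paper's maximum-likelihood optimizer; all names are illustrative.

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian RBF transformations of the input rows."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width ** 2)

def augment(X, centers, width):
    """Augmented covariate space: bias, original inputs, and RBF outputs."""
    return np.hstack([np.ones((len(X), 1)), X, rbf_features(X, centers, width)])

def fit_logistic(Z, y, lam=1e-2, lr=0.1, steps=2000):
    """Plain gradient-descent fit of L2-regularized logistic regression
    (an illustrative stand-in for the paper's maximum-likelihood step)."""
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Z @ w))                  # predicted probabilities
        w -= lr * (Z.T @ (p - y) / len(y) + lam * w)      # gradient step
    return w
```

The logistic model stays linear in the augmented covariates, so standard regularized maximum-likelihood machinery applies unchanged; only the feature construction is nonlinear.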
Sparse incremental regression modeling using correlation criterion with boosting search
 IEEE Signal Processing Letters
, 2005
Abstract

Cited by 2 (0 self)
A novel technique is presented to construct sparse generalized Gaussian kernel regression models. The proposed method appends regressors incrementally, tuning the mean vector and diagonal covariance matrix of an individual Gaussian regressor to best fit the training data, based on a correlation criterion. It is shown that this is identical to incrementally minimizing the modeling mean square error (MSE). The optimization at each regression stage is carried out with a simple search algorithm reinforced by boosting. Experimental results obtained using this technique demonstrate that it offers a viable alternative to existing state-of-the-art kernel modeling methods for constructing parsimonious models. Index Terms—Boosting, correlation, Gaussian kernel model, incremental modeling, regression.
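The equivalence the abstract claims, maximizing the correlation criterion is the same as maximally reducing the MSE, follows because with the optimal least-squares weight for a candidate regressor phi, the drop in residual energy equals (phi . r)^2 / (phi . phi). A minimal sketch with illustrative names (the boosting search over kernel parameters is not reproduced):

```python
import numpy as np

def corr_gain(phi, r):
    """Squared-correlation criterion between candidate regressor phi and
    the current residual r. With the optimal least-squares weight for phi,
    the residual-energy drop equals exactly this quantity, so maximizing
    the correlation criterion maximizes the MSE reduction."""
    return (phi @ r) ** 2 / (phi @ phi)

def pick_regressor(candidates, r):
    """Select the candidate with the largest gain and deflate the residual."""
    gains = [corr_gain(phi, r) for phi in candidates]
    k = int(np.argmax(gains))
    phi = candidates[k]
    w = (phi @ r) / (phi @ phi)       # optimal weight for the chosen term
    return k, w, r - w * phi          # index, weight, updated residual
```

In the paper's incremental scheme, the candidate set at each stage consists of Gaussian regressors whose means and covariances are themselves tuned by the boosted search; here they are simply given.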
Elastic-net prefiltering for two-class classification
 IEEE Trans. Cybern. 43 (February
, 2013
Abstract

Cited by 1 (0 self)
A two-stage linear-in-the-parameter model construction algorithm is proposed, aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing the model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and two regularization parameters are then optimized at the upper level using a particle swarm optimization algorithm, by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be computed analytically without splitting the data set, and that the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of this approach for the classification of noisy data. Index Terms—Cross-validation (CV), elastic net (EN), forward regression, leave-one-out (LOO) errors, linear-in-the-parameter model, regularization.
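The claim that LOO errors can be computed analytically without splitting the data rests on a standard identity for least-squares models: the leave-one-out residual is e_i / (1 - h_ii), where h_ii is the i-th diagonal entry of the hat matrix. The sketch below demonstrates that identity for a plain linear-in-the-parameter fit; the paper applies the same idea to its elastic-net prefiltered signal, which is not reproduced here.

```python
import numpy as np

def loo_residuals(Phi, y):
    """Analytic leave-one-out residuals for a least-squares linear-in-the-
    parameter model: e_i^loo = e_i / (1 - h_ii), where H = Phi (Phi^T Phi)^-1 Phi^T
    is the hat matrix. No refitting or data splitting is required."""
    H = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)   # hat (projection) matrix
    e = y - H @ y                                    # ordinary residuals
    return e / (1.0 - np.diag(H))
```

One full-data fit thus yields every held-out residual at once, which is why the LOO misclassification rate is cheap enough to sit inside the particle-swarm optimization loop.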
Backward elimination methods for associative memory network pruning
 Int. J. Hybrid Intelligent Systems
, 2004
Abstract

Cited by 1 (1 self)
Three hybrid data-based model construction/pruning formulae are introduced, using backward elimination as an automatic postprocessing approach to improve model sparsity. Each approach is based on a composite cost function between the model fit and one of three terms, A-optimality, D-optimality, or the parameter 1-norm of basis pursuit, that determines a pruning process. The A-/D-optimality-based pruning formulae contain some orthogonalisation between the pruned model and the deleted regressor. The basis pursuit cost function is derived as a simple formula without the need for an orthogonalisation process. These different approaches to parsimonious data-based modelling are applied to the same numerical examples in parallel to demonstrate their computational effectiveness.
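Backward elimination with a composite cost can be sketched as repeatedly deleting the regressor whose removal yields the cheapest refitted model. The sketch below loosely mirrors the basis-pursuit variant (squared fit error plus a 1-norm penalty on the refitted weights); the A-/D-optimality variants would swap in a different penalty term, and the paper's orthogonalisation-based update formulae are not reproduced. All names and the `keep` parameter are my own.

```python
import numpy as np

def backward_eliminate(Phi, y, lam=0.1, keep=1):
    """Backward elimination sketch with a composite cost:
    squared fit error + lam * (1-norm of the refitted weights).
    Repeatedly deletes the column whose removal gives the cheapest model,
    until only `keep` regressors remain."""
    cols = list(range(Phi.shape[1]))

    def cost(idx):
        w, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        r = y - Phi[:, idx] @ w
        return r @ r + lam * np.abs(w).sum()

    while len(cols) > keep:
        _, j = min((cost([c for c in cols if c != j]), j) for j in cols)
        cols.remove(j)
    return cols
```

In the paper the process stops automatically when every further deletion increases the composite cost, rather than at a fixed model size.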
Symmetric RBF Classifier for Nonlinear Detection in Multiple-Antenna-Aided Systems
, 2008
Abstract

Cited by 1 (0 self)
In this paper, we propose a powerful symmetric radial basis function (RBF) classifier for nonlinear detection in so-called "overloaded" multiple-antenna-aided communication systems. By exploiting the inherent symmetry property of the optimal Bayesian detector, the proposed symmetric RBF classifier is capable of approaching the optimal classification performance using noisy training data. The classifier construction process is robust to the choice of the RBF width and is computationally efficient. The proposed solution is capable of providing a signal-to-noise ratio (SNR) gain in excess of 8 dB over the powerful linear minimum bit error rate (BER) benchmark when supporting four users with the aid of two receive antennas, or seven users with four receive antenna elements.
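The symmetry being exploited is that the optimal Bayesian detector's decision function is odd: s(-x) = -s(x). One common way to build that into an RBF classifier is to pair each centre c with its mirror image -c, as in the illustrative sketch below (this shows the symmetry idea, not necessarily the paper's exact construction):

```python
import numpy as np

def symmetric_rbf(x, centers, weights, width):
    """Odd-symmetric RBF classifier output: pairing each centre c with its
    mirror -c enforces s(-x) = -s(x), matching the sign symmetry of the
    optimal Bayesian detector."""
    x = np.asarray(x, dtype=float)
    s = 0.0
    for w, c in zip(weights, centers):
        s += w * (np.exp(-np.sum((x - c) ** 2) / width ** 2)
                  - np.exp(-np.sum((x + c) ** 2) / width ** 2))
    return s
```

Enforcing the symmetry structurally halves the number of free centres and means noisy training data on one side of the decision boundary also constrains the other side.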
Original Citation
Abstract
Simon, D. (2002). Training radial basis neural networks with the extended Kalman filter. Neurocomputing, 48, 14.
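For the linear output weights of an RBF network, an extended-Kalman-filter training step reduces to a recursive-least-squares-style update. The sketch below shows one such per-sample update; it is a minimal illustration of the idea, not Simon's exact formulation, which also adapts the centres and widths, and the noise parameters `q` and `r` are my own choices.

```python
import numpy as np

def ekf_weight_update(w, P, h, y, q=1e-6, r=1e-2):
    """One Kalman-filter step for the linear output weights of an RBF network.

    w : current weight estimate, P : weight covariance,
    h : hidden-layer (RBF) outputs for the current sample, y : target,
    q : process-noise level, r : measurement-noise variance."""
    P = P + q * np.eye(len(w))            # process-noise inflation
    e = y - h @ w                         # innovation (prediction error)
    S = h @ P @ h + r                     # innovation variance (scalar)
    K = P @ h / S                         # Kalman gain
    w = w + K * e                         # weight update
    P = P - np.outer(K, h) @ P            # covariance update
    return w, P
```

Run on a stream of (h, y) samples, the estimate converges to the least-squares weights while `P` tracks the remaining uncertainty, which is what makes the EKF attractive for online RBF training.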