Results 1–10 of 46
ANFIS: Adaptive-Network-Based Fuzzy Inference System
 IEEE Transactions on Systems, Man and Cybernetics
, 1993
"... ..."
Improving Regression Estimation: Averaging Methods for Variance Reduction with Extensions to General Convex Measure Optimization
, 1993
"... ..."
Ensemble Learning using Decorrelated Neural Networks
 Connection Science
, 1996
"... We describe a decorrelation network training method for improving the quality of regression learning in "ensemble " neural networks that are composed of linear combinations of individual neural networks. In this method, individual networks are trained by backpropagation to not only reprodu ..."
Abstract

Cited by 68 (0 self)
We describe a decorrelation network training method for improving the quality of regression learning in "ensemble" neural networks that are composed of linear combinations of individual neural networks. In this method, individual networks are trained by backpropagation not only to reproduce a desired output, but also to have their errors be linearly decorrelated with the other networks. Outputs from the individual networks are then linearly combined to produce the output of the ensemble network. We demonstrate the performance of decorrelated network training on learning the "3-Parity" logic function, a noisy sine function, and a one-dimensional nonlinear function, and compare the results with ensemble networks composed of independently trained individual networks (without decorrelation training). Empirical results show that when individual networks are forced to be decorrelated with one another the resulting ensemble neural networks have lower mean squared errors than the ensembl...
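The training scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: to keep the gradients simple it uses fixed random tanh features with trainable output weights (rather than full backpropagation), and a quadratic penalty on the sample correlation between a member's error and the previous member's error; the data, penalty weight `lam`, and network sizes are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine, one of the test problems named in the abstract.
X = rng.uniform(-np.pi, np.pi, (200, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, 200)

def random_feature_net(n_hidden):
    # Fixed random tanh hidden layer; only output weights get trained.
    W = rng.normal(size=(1, n_hidden))
    b = rng.normal(size=n_hidden)
    return lambda X: np.tanh(X @ W + b)

def train(phi, X, y, prev_err=None, lam=0.5, lr=0.05, steps=2000):
    H = phi(X)
    w = np.zeros(H.shape[1])
    for _ in range(steps):
        err = H @ w - y
        grad = H.T @ err / len(y)                 # MSE gradient
        if prev_err is not None:
            # Penalty (lam/2) * corr**2 pushing this member's error to be
            # decorrelated with the previous member's (fixed) error.
            corr = np.mean(err * prev_err)
            grad += lam * corr * (H.T @ prev_err) / len(y)
        w -= lr * grad
    return w, H @ w - y

members, prev_err = [], None
for _ in range(3):
    phi = random_feature_net(20)
    w, prev_err = train(phi, X, y, prev_err)
    members.append((phi, w))

# The ensemble output is a linear (here: equal-weight) combination.
ensemble = lambda X: np.mean([phi(X) @ w for phi, w in members], axis=0)
mse = np.mean((ensemble(X) - y) ** 2)
```

Training members sequentially against the previous member's fixed errors is one simple way to realize the decorrelation idea; the paper's variant trains full networks by backpropagation.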
New tools in nonlinear modelling and prediction
 Comput. Manag. Sci
, 2004
"... 1.1 The Gamma test........................... 4 1.1.1 The slope constant A.................... 6 1.1.2 Local versus global...................... 7 ..."
Abstract

Cited by 34 (4 self)
1.1 The Gamma test (p. 4)
1.1.1 The slope constant A (p. 6)
1.1.2 Local versus global (p. 7)
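For context, the Gamma test named in this outline estimates the output-noise variance of an unknown smooth input-output relationship directly from data, by regressing near-neighbour output statistics (gamma) on input statistics (delta): the regression intercept estimates the noise variance and the slope is the constant A mentioned above. A rough numpy sketch under those assumptions (brute-force neighbour search, synthetic data; not code from the cited work):

```python
import numpy as np

def gamma_test(X, y, p=10):
    # Estimate Var(noise) in y = f(X) + noise without knowing f,
    # via near-neighbour statistics (the Gamma test).
    X = np.asarray(X, float).reshape(len(y), -1)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)            # exclude self-distances
    order = np.argsort(d2, axis=1)[:, :p]   # p nearest neighbours
    idx = np.arange(len(y))
    delta = np.array([np.mean(d2[idx, order[:, k]]) for k in range(p)])
    gamma = np.array([np.mean((y[order[:, k]] - y) ** 2) / 2
                      for k in range(p)])
    # Regress gamma on delta: intercept -> noise variance,
    # slope -> the constant A discussed in the cited thesis.
    A, G = np.polyfit(delta, gamma, 1)
    return G, A

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (500, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.1, 500)
noise_var, slope = gamma_test(X, y)   # true noise variance is 0.01
```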
Statistical Control of RBF-like Networks for Classification
 In 7th International Conference on Artificial Neural Networks
, 1997
"... . Incremental Net Pro (IncNet Pro) with local learning feature and statistically controlled growing and pruning of the network is introduced. The architecture of the net is based on RBF networks. Extended Kalman Filter algorithm and its new fast version is proposed and used as learning algorithm. In ..."
Abstract

Cited by 29 (13 self)
Incremental Net Pro (IncNet Pro), with a local learning feature and statistically controlled growing and pruning of the network, is introduced. The architecture of the net is based on RBF networks. The Extended Kalman Filter algorithm and a new fast version of it are proposed and used as the learning algorithm. IncNet Pro is similar to the Resource-Allocating Network described by Platt in the main idea of expanding the network. A novel statistical criterion is used to determine the growing point. Biradial functions are used instead of radial basis functions to obtain a more flexible network.

1 Introduction

The Radial Basis Function (RBF) networks [13,12] were designed as a solution to an approximation problem in multidimensional spaces. The typical form of the RBF network can be written as $f(x; w, p) = \sum_{i=1}^{M} w_i G_i(\|x\|_i; p_i)$ (1), where $M$ is the number of neurons in the hidden layer, $G_i(\|x\|_i; p_i)$ is the $i$-th radial basis function, and $p_i$ are adjustable parameters such as...
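A minimal sketch of evaluating a network of the form in Eq. (1), assuming Gaussian basis functions and interpreting the adjustable parameters $p_i$ as per-unit centres and widths — one common instantiation; the paper's biradial functions are more general:

```python
import numpy as np

def rbf_forward(X, centers, widths, w):
    # f(x) = sum_i w_i * G_i(...), with Gaussian G and per-unit
    # parameters p_i = (centre, width) -- an assumed instantiation.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    G = np.exp(-(d / widths) ** 2)          # shape (n_samples, M)
    return G @ w

rng = np.random.default_rng(0)
M = 5                                       # hidden-layer size
centers = rng.uniform(-1, 1, (M, 2))
widths = np.full(M, 0.5)
w = rng.normal(size=M)
y = rbf_forward(rng.uniform(-1, 1, (10, 2)), centers, widths, w)
```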
Global Optimization for Artificial Neural Networks: A Tabu Search Application
"... The ability of neural networks to closely approximate unknown functions to any degree of desired accuracy has generated considerable demand for Neural Network research in Business. The attractiveness of neural network research stems from researchers' need to approximate models within the busine ..."
Abstract

Cited by 12 (1 self)
The ability of neural networks to closely approximate unknown functions to any degree of desired accuracy has generated considerable demand for Neural Network research in Business. The attractiveness of neural network research stems from researchers' need to approximate models within the business environment without having a priori knowledge about the true underlying function. Gradient techniques, such as backpropagation, are currently the most widely used methods for neural network optimization. Since these techniques search for local solutions, a global search algorithm is warranted. In this paper we examine a recently popularized optimization technique, Tabu Search, as a possible alternative to the problematic backpropagation. A Monte Carlo study was conducted to test the appropriateness of this global search technique for optimizing neural networks. Holding the neural network architecture constant, 530 independent runs were conducted for each of seven test functions, including a pr...
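A toy illustration of the kind of search involved, not the study's implementation: tabu search over the weight vector of a small fixed-architecture network, with a tabu list of recently visited (coarsely rounded) weight regions and an aspiration criterion that admits tabu moves which improve on the best solution found. The data, neighbourhood size, tenure, and step size are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed 2-2-1 architecture on XOR; the study likewise holds
# the architecture constant while comparing optimizers.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def mse(w):
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

def tabu_search(dim=9, iters=300, n_neighbors=20, tenure=15, step=0.5):
    w = rng.normal(size=dim)
    best, best_cost = w.copy(), mse(w)
    tabu = []                                  # recently visited regions
    for _ in range(iters):
        cands = w + step * rng.normal(size=(n_neighbors, dim))
        for c in sorted(cands, key=mse):       # best candidate first
            key = tuple(np.round(c, 1))        # coarse region signature
            # Accept if not tabu, or if it beats the best found so far
            # (the usual aspiration criterion).
            if key not in tabu or mse(c) < best_cost:
                w = c
                tabu = (tabu + [key])[-tenure:]
                break
        if mse(w) < best_cost:
            best, best_cost = w.copy(), mse(w)
    return best, best_cost

w_best, cost = tabu_search()
```

Unlike gradient descent, each move only needs function evaluations, and the tabu list discourages cycling back into regions just visited, which is what lets the search escape local minima.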
A Smoothing Regularizer for Feedforward and Recurrent Neural Networks
, 1996
"... We derive a smoothing regularizer for dynamic network models by requiring robustness in prediction performance to perturbations of the training data. The regularizer can be viewed as a generalization of the first order Tikhonov stabilizer to dynamic models. For two layer networks with recurrent conn ..."
Abstract

Cited by 12 (1 self)
We derive a smoothing regularizer for dynamic network models by requiring robustness in prediction performance to perturbations of the training data. The regularizer can be viewed as a generalization of the first-order Tikhonov stabilizer to dynamic models. For two-layer networks with recurrent connections described by $Y(t) = f\left(WY(t-\tau) + VX(t)\right)$, $Z(t) = UY(t)$, the training criterion with the regularizer is $D = \frac{1}{N}\sum_{t=1}^{N} \|Z(t) - \hat{Z}(\Phi, I(t))\|^2 + \lambda \rho_\tau^2(\Phi)$, where $\Phi = \{U, V, W\}$ is the network parameter set, $Z(t)$ are the targets, $I(t) = \{X(s),\ s = 1, 2, \dots, t\}$ represents the current and all historical input information, $N$ is the size of the training data set, $\rho_\tau^2(\Phi)$ is the regularizer, and $\lambda$ is a regularization parameter. The closed-form expression for the regularizer for time-lagged recurrent networks is $\rho_\tau(\Phi) = \frac{\gamma\|U\|\,\|V\|}{1 - \gamma\|W\|}\left[1 - e^{(\gamma\|W\| - 1)/\tau}\right]$; ...
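The model class referred to above can be illustrated with a short simulation, assuming a zero initial state and an integer lag; this is only a sketch of the dynamics $Y(t) = f(WY(t-\tau) + VX(t))$, $Z(t) = UY(t)$, not the paper's code:

```python
import numpy as np

def run_recurrent(X_seq, W, V, U, tau=1, f=np.tanh):
    # Y(t) = f(W Y(t - tau) + V X(t)),  Z(t) = U Y(t), with Y = 0
    # for t <= 0 (an assumed initial condition).
    Y = np.zeros((len(X_seq) + tau, W.shape[0]))
    Z = []
    for t, x in enumerate(X_seq):
        Y[t + tau] = f(W @ Y[t] + V @ x)
        Z.append(U @ Y[t + tau])
    return np.array(Z)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.3, size=(4, 4))   # recurrent weights
V = rng.normal(size=(4, 2))              # input weights
U = rng.normal(size=(1, 4))              # output weights
Z = run_recurrent(rng.normal(size=(10, 2)), W, V, U)
```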
Smoothing Regularizers for Projective Basis Function Networks
, 1996
"... Smoothing regularizers for radial basis functions have been studied extensively, but no general smoothing regularizers for projective basis functions (PBFs), such as the widelyused sigmoidal PBFs, have heretofore th been proposed. We derive new classes of algebraicallysimple morder smoothing reg ..."
Abstract

Cited by 10 (1 self)
Smoothing regularizers for radial basis functions have been studied extensively, but no general smoothing regularizers for projective basis functions (PBFs), such as the widely-used sigmoidal PBFs, have heretofore been proposed. We derive new classes of algebraically-simple $m$-th-order smoothing regularizers for networks of projective basis functions $f(W, x) = \sum_{j=1}^{N} u_j\, g[x \cdot v_j + v_{j0}] + u_0$, with general transfer functions $g[\cdot]$. These regularizers are $R_G(W, m) = \sum_j u_j^2 \|v_j\|^{2m}$ (global form) and $R_L(W, m) = \sum_j u_j^2 \|v_j\|^{2m}$ (local form). With appropriate constant factors, these regularizers bound the corresponding $m$-th-order smoothing integral $\int \left\| \frac{\partial^m f(W, x)}{\partial x^m} \right\|^2 \Omega(x)\, dx$. In the above expressions, $\{v_j\}$ are the projection vectors, $W$ denotes all the network weights $\{u_j, u_0, v_j, v_{j0}\}$, and $\Omega(x)$ is a weighting function (not necessarily the input density) on the $D$-dimensional input space. The global and local cases are distinguished by different choices of $\Omega(x)$.
Towards Long-Term Prediction
, 2000
"... This paper describes a simple method of obtaining longerterm predictions from a nonlinear timeseries, assuming one already has a reasonably good shortterm predictor. The usefulness of the technique is that it eliminates, to some extent, the systematic errors of the iterated shortterm predictor. ..."
Abstract

Cited by 9 (2 self)
This paper describes a simple method of obtaining longer-term predictions from a nonlinear time series, assuming one already has a reasonably good short-term predictor. The usefulness of the technique is that it eliminates, to some extent, the systematic errors of the iterated short-term predictor. The technique we describe also provides an indication of the prediction horizon. We consider systems with both observational and dynamic noise and analyse a number of artificial and experimental systems, obtaining consistent results. We also compare this method of longer-term prediction with ensemble prediction.
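The baseline the paper improves on, iterating a one-step predictor, can be sketched as follows. The logistic map and the slightly mis-parameterized "predictor" are illustrative stand-ins (not from the paper), showing how a small systematic one-step error compounds with the horizon:

```python
import numpy as np

def iterate_predictor(step, x0, horizon):
    # Roll a one-step predictor forward `horizon` steps.
    xs = [x0]
    for _ in range(horizon):
        xs.append(step(xs[-1]))
    return np.array(xs[1:])

# Toy system: the logistic map; the slightly mis-parameterized map
# plays the role of a learned short-term predictor with a small
# systematic error.
true_map = lambda x: 4.0 * x * (1.0 - x)
predictor = lambda x: 3.99 * x * (1.0 - x)

x0 = 0.3
pred = iterate_predictor(predictor, x0, 10)
truth = iterate_predictor(true_map, x0, 10)
err = np.abs(pred - truth)   # the iterated error grows with horizon
```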
Optimal Learning in Artificial Neural Networks: A Theoretical View
 Neurocomput
"... The effectiveness of connectionist models in emulating intelligent behaviour and solving significant practical problems is strictly related to the capability of the learning algorithms to find optimal or nearoptimal solutions and to generalize to new examples. This paper reviews some theoretical co ..."
Abstract

Cited by 7 (1 self)
The effectiveness of connectionist models in emulating intelligent behaviour and solving significant practical problems is strictly related to the capability of the learning algorithms to find optimal or near-optimal solutions and to generalize to new examples. This paper reviews some theoretical contributions to optimal learning in an attempt to provide a unified view and give the state of the art in the field. The focus of the review is on the problem of local minima in the cost function, which is likely to affect, more or less, any learning algorithm. Starting from this analysis, we briefly review proposals for discovering optimal solutions and suggest conditions for designing architectures tailored to a given task.

1 Introduction

In the last few years impressive efforts have been made in using connectionist models both for modelling human behaviour and for solving practical problems. In the field of cognitive science and psychology, we have been witnessing a debate on the actual ro...