Results 1–10 of 34
ANFIS: Adaptive-Network-Based Fuzzy Inference System
, 1993
Abstract
Cited by 434 (5 self)
This paper presents the architecture and learning procedure underlying ANFIS (Adaptive-Network-based Fuzzy Inference System), a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In our simulation, we employ the ANFIS architecture to model nonlinear functions, identify nonlinear components online in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested.

1 Introduction
System modeling based on conventional mathematical tools (e.g., differential equations) is not well suited for dealing with ill-define...
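The inference machinery that ANFIS trains can be sketched as a first-order Sugeno fuzzy system: each rule's firing strength comes from a membership function, and the output is the normalized-strength-weighted average of linear rule consequents. The sketch below is illustrative only — the Gaussian membership shape, rule count, and parameter values are assumptions, and the hybrid (least-squares plus backpropagation) training is not reproduced.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership function (one common ANFIS choice)."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def sugeno_infer(x, centers, sigmas, consequents):
    """First-order Sugeno inference for a single scalar input x.

    Each rule i: IF x is A_i THEN y_i = p_i * x + r_i.
    Output is the firing-strength-weighted average of the y_i,
    which is exactly the mapping an ANFIS network computes layer by layer.
    """
    w = gaussian_mf(x, centers, sigmas)                 # rule firing strengths
    w_norm = w / w.sum()                                # normalized strengths
    y_rule = consequents[:, 0] * x + consequents[:, 1]  # linear consequents
    return float(np.dot(w_norm, y_rule))

# Two hypothetical rules covering "low" and "high" regions of the input.
centers = np.array([0.0, 1.0])
sigmas = np.array([0.5, 0.5])
consequents = np.array([[1.0, 0.0],    # y = x      near x = 0
                        [2.0, 1.0]])   # y = 2x + 1 near x = 1
print(sugeno_infer(0.0, centers, sigmas, consequents))
```

In a full ANFIS, the centers, widths, and consequent coefficients are the adaptive-network parameters fitted by the hybrid learning procedure.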
Improving Regression Estimation: Averaging Methods for Variance Reduction with Extensions to General Convex Measure Optimization
, 1993
Ensemble Learning using Decorrelated Neural Networks
 Connection Science
, 1996
Abstract
Cited by 68 (0 self)
We describe a decorrelation network training method for improving the quality of regression learning in "ensemble" neural networks that are composed of linear combinations of individual neural networks. In this method, individual networks are trained by backpropagation not only to reproduce a desired output, but also to have their errors be linearly decorrelated with the other networks. Outputs from the individual networks are then linearly combined to produce the output of the ensemble network. We demonstrate the performance of decorrelated network training on learning the "3 Parity" logic function, a noisy sine function, and a one-dimensional nonlinear function, and compare the results with ensemble networks composed of independently trained individual networks (without decorrelation training). Empirical results show that when individual networks are forced to be decorrelated with one another the resulting ensemble neural networks have lower mean squared errors than the ensembl...
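The key quantity in such training is a penalty that measures how correlated the members' residuals are. A minimal sketch of one such penalty is below — the exact form of the paper's decorrelation term is an assumption here; this version simply sums squared pairwise error correlations, which would be added to each member's loss during training.

```python
import numpy as np

def decorrelation_penalty(errors):
    """Sum of squared pairwise error correlations across ensemble members.

    errors: array of shape (n_nets, n_samples) holding each member
    network's residuals. Driving this toward zero makes the members'
    errors linearly decorrelated, so that averaging cancels them better.
    (Sketch form, not necessarily the paper's exact term.)
    """
    e = errors - errors.mean(axis=1, keepdims=True)
    n = e.shape[0]
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            denom = np.linalg.norm(e[i]) * np.linalg.norm(e[j])
            penalty += (np.dot(e[i], e[j]) / denom) ** 2
    return penalty

rng = np.random.default_rng(0)
# Two members with nearly identical errors vs. two with independent errors.
shared = rng.normal(size=100)
correlated = np.vstack([shared, shared]) + 0.01 * rng.normal(size=(2, 100))
independent = rng.normal(size=(2, 100))
print(decorrelation_penalty(correlated) > decorrelation_penalty(independent))
```

Highly correlated residuals yield a penalty near 1 per pair, while independent residuals yield a penalty near 0, which is why the correlated ensemble scores higher here.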
New tools in nonlinear modelling and prediction
 Comput. Manag. Sci
, 2004
Abstract
Cited by 34 (4 self)
1.1 The Gamma test
1.1.1 The slope constant A
1.1.2 Local versus global
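The Gamma test named in this outline estimates the noise variance on the output of an unknown smooth input-output mapping directly from data, by regressing near-neighbour output differences against near-neighbour input distances; the intercept Γ estimates the noise variance and the slope A reflects the function's smoothness. A brute-force sketch under those general assumptions (not the authors' implementation):

```python
import numpy as np

def gamma_test(X, y, p=10):
    """Near-neighbour Gamma test sketch.

    For k = 1..p, compute delta(k) = mean squared distance to the k-th
    nearest input neighbour and gamma(k) = mean half squared difference
    of the corresponding outputs, then fit gamma = Gamma + A * delta.
    Returns (Gamma, A): noise-variance estimate and slope constant.
    """
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    np.fill_diagonal(d2, np.inf)                         # exclude self-matches
    order = np.argsort(d2, axis=1)[:, :p]                # p nearest neighbours
    deltas, gammas = [], []
    for k in range(p):
        nbr = order[:, k]
        deltas.append(d2[np.arange(n), nbr].mean())
        gammas.append(0.5 * ((y - y[nbr]) ** 2).mean())
    A, Gamma = np.polyfit(deltas, gammas, 1)             # slope, intercept
    return Gamma, A

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.1, size=500)
Gamma, A = gamma_test(X, y)
print(Gamma)  # should be near the true noise variance 0.01
```

On smooth targets with additive noise, the intercept recovers the noise variance without ever fitting a model of the function itself, which is what makes the test useful for nonlinear modelling.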
Statistical Control of RBF-like Networks for Classification
 In 7th International Conference on Artificial Neural Networks
, 1997
Abstract
Cited by 29 (13 self)
Incremental Net Pro (IncNet Pro), with a local learning feature and statistically controlled growing and pruning of the network, is introduced. The architecture of the net is based on RBF networks. The Extended Kalman Filter algorithm and a new fast version of it are proposed and used as the learning algorithm. IncNet Pro is similar to the Resource-Allocating Network described by Platt in the main idea of expanding the network. A novel statistical criterion is used to determine the growing point. Bi-radial functions are used instead of radial basis functions to obtain a more flexible network.

1 Introduction
The Radial Basis Function (RBF) networks [13,12] were designed as a solution to an approximation problem in multidimensional spaces. The typical form of the RBF network can be written as

f(x; w, p) = Σ_{i=1}^{M} w_i G_i(||x||_i; p_i)    (1)

where M is the number of neurons in the hidden layer, G_i(||x||_i; p_i) is the i-th radial basis function, and p_i are adjustable parameters such as...
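The forward pass of Eq. (1) is a weighted sum of radial basis evaluations. A minimal sketch with Gaussian basis functions follows — IncNet Pro's bi-radial functions, EKF training, and growing/pruning control are not reproduced, and the Gaussian choice is an illustrative assumption.

```python
import numpy as np

def rbf_forward(x, weights, centers, widths):
    """Evaluate f(x) = sum_i w_i * G_i(||x - c_i||; sigma_i), Gaussian G_i.

    weights: (M,) output weights w_i
    centers: (M, D) basis centers c_i
    widths:  (M,) basis widths sigma_i
    """
    d = np.linalg.norm(x - centers, axis=1)   # distances to the M centers
    g = np.exp(-((d / widths) ** 2))          # Gaussian radial basis values
    return float(np.dot(weights, g))

# A tiny hypothetical two-unit network in 2-D input space.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([0.5, 0.5])
weights = np.array([2.0, -1.0])
print(rbf_forward(np.array([0.0, 0.0]), weights, centers, widths))
```

At the first center the first unit fires fully while the distant unit contributes almost nothing, so the output is close to its weight of 2.0.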
A Smoothing Regularizer for Feedforward and Recurrent Neural Networks
, 1996
Abstract
Cited by 12 (1 self)
We derive a smoothing regularizer for dynamic network models by requiring robustness in prediction performance to perturbations of the training data. The regularizer can be viewed as a generalization of the first-order Tikhonov stabilizer to dynamic models. For two-layer networks with recurrent connections described by

Y(t) = f( W Y(t − τ) + V X(t) ),   Z(t) = U Y(t),

the training criterion with the regularizer is

D = (1/N) Σ_{t=1}^{N} ||Z(t) − Z(Φ; I(t))||² + λ ρ_τ²(Φ),

where Φ = {U, V, W} is the network parameter set, Z(t) are the targets, I(t) = {X(s); s = 1, 2, ..., t} represents the current and all historical input information, N is the size of the training data set, ρ_τ²(Φ) is the regularizer, and λ is a regularization parameter. The closed-form expression for the regularizer for time-lagged recurrent networks is:

ρ_τ(Φ) = [ γ ||U|| ||V|| / (1 − γ ||W||) ] · [ 1 − e^{(γ||W|| − 1) τ} ], ...
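A helper evaluating a closed-form regularizer of this shape from the weight matrices is straightforward; the exact constants and norm choice in the paper may differ from this reading of the abstract, so treat the formula below as an assumption. It requires γ||W|| < 1, i.e. contractive recurrent dynamics.

```python
import numpy as np

def smoothing_regularizer(U, V, W, tau, gamma=1.0):
    """rho_tau(Phi) for a time-lagged recurrent network (sketch).

    Implements rho_tau = gamma*||U||*||V|| / (1 - gamma*||W||)
                         * (1 - exp((gamma*||W|| - 1) * tau)),
    with spectral (2-) norms — a reading of the abstract's closed form,
    not a verified reproduction of the paper's expression.
    """
    nU, nV, nW = (np.linalg.norm(M, 2) for M in (U, V, W))
    assert gamma * nW < 1.0, "requires contractive recurrence: gamma*||W|| < 1"
    return gamma * nU * nV / (1 - gamma * nW) * (1 - np.exp((gamma * nW - 1) * tau))

rng = np.random.default_rng(0)
U, V, W = rng.normal(scale=0.1, size=(3, 4, 4))  # small random weights
print(smoothing_regularizer(U, V, W, tau=1.0))
```

The penalty grows with the input and output gains ||U||, ||V|| and with the recurrent gain ||W||, matching the intuition that a smoother input-output map needs smaller effective gains.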
Global Optimization for Artificial Neural Networks: A Tabu Search Application
Abstract
Cited by 12 (1 self)
The ability of neural networks to closely approximate unknown functions to any degree of desired accuracy has generated considerable demand for Neural Network research in Business. The attractiveness of neural network research stems from researchers' need to approximate models within the business environment without having a priori knowledge about the true underlying function. Gradient techniques, such as backpropagation, are currently the most widely used methods for neural network optimization. Since these techniques search for local solutions, a global search algorithm is warranted. In this paper we examine a recently popularized optimization technique, Tabu Search, as a possible alternative to the problematic backpropagation. A Monte Carlo study was conducted to test the appropriateness of this global search technique for optimizing neural networks. Holding the neural network architecture constant, 530 independent runs were conducted for each of seven test functions, including a pr...
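The core of tabu search is a local move strategy that may accept worsening moves while a tabu list forbids recently visited points, letting the search escape the local minima that trap gradient methods. A minimal continuous-weight sketch follows; the neighbourhood, tabu representation, and toy objective are illustrative assumptions, not the study's setup.

```python
import numpy as np

def tabu_search(f, x0, step=0.1, n_iter=200, tabu_len=20, n_cand=10, seed=0):
    """Minimal tabu search over a continuous parameter vector (sketch).

    Each iteration samples candidate neighbours, discards those on the
    tabu list, and moves to the best remaining one even if it is worse
    than the current point; the best point ever visited is returned.
    """
    rng = np.random.default_rng(seed)
    x, best, best_f = x0.copy(), x0.copy(), f(x0)
    tabu = []
    for _ in range(n_iter):
        cands = [x + rng.normal(scale=step, size=x.shape) for _ in range(n_cand)]
        cands = [c for c in cands if tuple(np.round(c, 2)) not in tabu]
        if not cands:
            continue
        x = min(cands, key=f)                  # accept best neighbour, even uphill
        tabu.append(tuple(np.round(x, 2)))     # mark (rounded) point as tabu
        tabu = tabu[-tabu_len:]                # fixed-length tabu memory
        if f(x) < best_f:
            best, best_f = x.copy(), f(x)
    return best, best_f

# Toy non-convex objective standing in for a network's training error.
rosenbrock = lambda w: (1 - w[0]) ** 2 + 100 * (w[1] - w[0] ** 2) ** 2
w, err = tabu_search(rosenbrock, np.array([-1.0, 1.0]))
print(err)
```

Because the walk is not forced downhill, the tabu list is what prevents it from cycling back through the same basin it just left.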
Smoothing Regularizers for Projective Basis Function Networks
, 1996
Abstract
Cited by 10 (1 self)
Smoothing regularizers for radial basis functions have been studied extensively, but no general smoothing regularizers for projective basis functions (PBFs), such as the widely-used sigmoidal PBFs, have heretofore been proposed. We derive new classes of algebraically-simple m-th-order smoothing regularizers for networks of projective basis functions

f(W, x) = Σ_j u_j g[ v_j · x + v_{j0} ] + u_0,

with general transfer functions g[·]. These regularizers are:

R_G(W, m) = Σ_j u_j² ||v_j||^{2m}   (Global Form)
R_L(W, m) = Σ_j u_j² ||v_j||^{2m}   (Local Form)

With appropriate constant factors, these regularizers bound the corresponding m-th-order smoothing integral

∫ dx Ω(x) || ∂^m f(W, x) / ∂x^m ||².

In the above expressions, {v_j} are the projection vectors, W denotes all the network weights {u_j, u_0, v_j, v_{j0}}, and Ω(x) is a weighting function (not necessarily the input density) on the D-dimensional input space. The global and local cases are distinguished by different choices of Ω(x).
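Penalties of this shape are cheap to compute from the weights alone. A sketch of a Σ_j u_j² ||v_j||^{2m} term follows; the exponent and any constant factors are read off the (garbled) abstract and should be treated as assumptions rather than the paper's exact forms.

```python
import numpy as np

def projective_smoothing_regularizer(u, v, m=2):
    """Order-m roughness penalty sum_j u_j^2 * ||v_j||^(2m) for a PBF net.

    u: (M,) output weights u_j; v: (M, D) projection vectors v_j.
    Large projection vectors make sigmoidal units steep, so penalizing
    ||v_j|| enforces smoothness of the network function (sketch form).
    """
    return float(np.sum(u ** 2 * np.linalg.norm(v, axis=1) ** (2 * m)))

u = np.array([1.0, -0.5])                  # hypothetical output weights u_j
v = np.array([[3.0, 0.0], [0.1, 0.1]])     # hypothetical projection vectors v_j
print(projective_smoothing_regularizer(u, v))
```

Note how the steep unit (||v|| = 3) dominates the penalty, while the shallow unit contributes almost nothing — exactly the behaviour wanted from a smoothing term.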
Towards Long-Term Prediction
, 2000
Abstract
Cited by 9 (2 self)
This paper describes a simple method of obtaining longer-term predictions from a nonlinear time-series, assuming one already has a reasonably good short-term predictor. The usefulness of the technique is that it eliminates, to some extent, the systematic errors of the iterated short-term predictor. The technique we describe also provides an indication of the prediction horizon. We consider systems with both observational and dynamic noise and analyse a number of artificial and experimental systems obtaining consistent results. We also compare this method of longer-term prediction with ensemble prediction.
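The baseline being improved here is the iterated short-term predictor: feed each one-step forecast back in as if it were an observation. A minimal sketch of that iteration is below — the toy linear predictor is a hypothetical stand-in, and the paper's systematic-error correction is not reproduced.

```python
import numpy as np

def iterate_predictor(predict, history, horizon):
    """Iterate a one-step predictor to produce a multi-step forecast.

    predict: function mapping a delay vector of recent values to the
    next value. Each forecast is appended to the history and fed back,
    so one-step errors compound — the systematic drift the paper's
    method aims to reduce.
    """
    h = list(history)
    out = []
    for _ in range(horizon):
        nxt = predict(np.array(h[-len(history):]))
        out.append(float(nxt))
        h.append(nxt)
    return out

# Toy one-step predictor for the map x_{t+1} = 0.9 * x_t (hypothetical).
preds = iterate_predictor(lambda d: 0.9 * d[-1], [1.0], horizon=3)
print(preds)
```

With a perfect one-step model the iteration is exact; with an imperfect one, the errors it makes are systematic, which is what makes them partially correctable.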
Evolving Predictors for Chaotic Time Series
 in Proc. SPIE: Application and Science of Computational Intelligence
, 1998
Abstract
Cited by 7 (0 self)
Neural networks are a popular representation for inducing single-step predictors for chaotic time series. For complex time series it is often the case that a large number of hidden units must be used to reliably acquire appropriate predictors. This paper describes an evolutionary method that evolves a class of dynamic systems with a form similar to neural networks but requiring fewer computational units. Results for experiments on two popular chaotic time series are described, and the current method's performance is shown to compare favorably with using larger neural networks.

Keywords: evolutionary computation, evolutionary programming, genetic programming, neural networks, chaotic time series prediction

1. INTRODUCTION
What once were thought to be random, unpredictable sequences in science, technology, and nature are now newly identified as complex but deterministic, and consequently predictable. Chaos, the science of nonlinear systems, has provided new tools and understandin...
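The evolutionary-programming style of search used by such methods can be sketched as a (μ+λ) loop: mutate each parent with Gaussian noise, pool parents and children, and keep the fittest. The sketch below evolves the two coefficients of a toy linear map rather than the paper's dynamic-system predictors, whose representation is not reproduced here.

```python
import numpy as np

def evolve(fitness, dim, pop=20, gens=50, sigma=0.3, seed=0):
    """(mu + lambda)-style evolutionary programming loop (sketch).

    Mutates each parent with Gaussian noise and keeps the best half of
    the combined parent/child pool; lower fitness is better. Parents
    survive selection, so the best individual never gets worse.
    """
    rng = np.random.default_rng(seed)
    parents = rng.normal(size=(pop, dim))
    for _ in range(gens):
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        both = np.vstack([parents, children])
        scores = np.array([fitness(ind) for ind in both])
        parents = both[np.argsort(scores)[:pop]]    # elitist truncation selection
    return parents[0]

# Toy task: recover (a, b) of the known map x_{t+1} = 0.7 * x_t + 0.1.
series = [1.0]
for _ in range(30):
    series.append(0.7 * series[-1] + 0.1)
series = np.array(series)
err = lambda w: float(np.mean((series[1:] - (w[0] * series[:-1] + w[1])) ** 2))
best = evolve(err, dim=2)
print(err(best))
```

No gradients are needed, which is what lets such methods search over structures (unit counts, connectivity) as well as weights.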