Results 1–10 of 51
ANFIS: adaptive-network-based fuzzy inference system
 IEEE Transactions on Systems, Man, and Cybernetics
, 1993
Improving Regression Estimation: Averaging Methods for Variance Reduction with Extensions to General Convex Measure Optimization
, 1993
Ensemble Learning using Decorrelated Neural Networks
 Connection Science
, 1996
Abstract

Cited by 77 (0 self)
We describe a decorrelation network training method for improving the quality of regression learning in "ensemble" neural networks that are composed of linear combinations of individual neural networks. In this method, individual networks are trained by backpropagation not only to reproduce a desired output, but also to have their errors be linearly decorrelated with the other networks. Outputs from the individual networks are then linearly combined to produce the output of the ensemble network. We demonstrate the performance of decorrelated network training on learning the "3-Parity" logic function, a noisy sine function, and a one-dimensional nonlinear function, and compare the results with ensemble networks composed of independently trained individual networks (without decorrelation training). Empirical results show that when individual networks are forced to be decorrelated with one another, the resulting ensemble neural networks have lower mean squared errors than the ensembl...
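The core idea of the abstract above can be sketched as a penalty on the correlation between one member's errors and those of earlier members, plus a linear combination of the members' outputs. The function names and the squared-correlation form of the penalty are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def decorrelation_penalty(err_i, prev_errs):
    # Squared mean error-correlation of member i with each earlier member;
    # driving this toward zero makes the members' errors linearly decorrelated.
    return sum(np.mean(err_i * e) ** 2 for e in prev_errs)

def ensemble_output(member_outputs, weights):
    # The ensemble is a linear combination of the individual networks' outputs.
    return np.dot(weights, member_outputs)
```

In training, the penalty would be added (with some weighting) to each member's usual squared-error loss, so backpropagation trades reproducing the target against decorrelating with the rest of the ensemble.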
A fast nearest-neighbor algorithm based on a principal axis search tree
 IEEE Trans. Pattern Anal. Mach. Intell.
, 2001
Abstract

Cited by 46 (0 self)
Abstract—A new fast nearest-neighbor algorithm is described that uses principal component analysis to build an efficient search tree. At each node in the tree, the data set is partitioned along the direction of maximum variance. The search algorithm efficiently uses a depth-first search and a new elimination criterion. The new algorithm was compared to 16 other fast nearest-neighbor algorithms on three types of common benchmark data sets, including problems from time series prediction and image vector quantization. This comparative study illustrates the strengths and weaknesses of all of the leading algorithms. The new algorithm performed very well on all of the data sets and was consistently ranked among the top three algorithms. Index Terms—Nearest neighbor, vector quantization encoding, principal components analysis, closest point, intrinsic dimension, post office problem.
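The node-splitting step described above (partitioning the data along the direction of maximum variance) can be sketched as follows; the helper name and the choice of a median split point are assumptions for illustration:

```python
import numpy as np

def split_node(points):
    # Direction of maximum variance = leading eigenvector of the covariance matrix.
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    _, eigvecs = np.linalg.eigh(cov)   # eigenvalues returned in ascending order
    axis = eigvecs[:, -1]              # principal axis (largest eigenvalue)
    proj = centered @ axis
    split = np.median(proj)            # balanced split of the node
    return points[proj <= split], points[proj > split]
```

Applying this recursively yields the principal axis search tree; the query phase then descends depth-first and prunes branches with an elimination criterion.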
New tools in nonlinear modelling and prediction
 Comput. Manag. Sci
, 2004
Abstract

Cited by 36 (4 self)
1.1 The Gamma test
1.1.1 The slope constant A
1.1.2 Local versus global
Statistical Control of RBF-like Networks for Classification
 In 7th International Conference on Artificial Neural Networks
, 1997
Abstract

Cited by 30 (14 self)
Incremental Net Pro (IncNet Pro), with local learning features and statistically controlled growing and pruning of the network, is introduced. The architecture of the net is based on RBF networks. The Extended Kalman Filter algorithm and a new fast version of it are proposed and used as the learning algorithm. IncNet Pro is similar to the Resource Allocating Network described by Platt in the main idea of expanding the network. A novel statistical criterion is used to determine the growing point. Bi-radial functions are used instead of radial basis functions to obtain a more flexible network.

1 Introduction

The Radial Basis Function (RBF) networks [13,12] were designed as a solution to an approximation problem in multidimensional spaces. The typical form of the RBF network can be written as

f(x; w, p) = Σ_{i=1}^{M} w_i G_i(||x||_i; p_i)   (1)

where M is the number of neurons in the hidden layer, G_i(||x||_i; p_i) is the i-th radial basis function, and p_i are adjustable parameters such as...
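Equation (1) above can be sketched numerically. A Gaussian basis with explicit centers is used here purely for illustration (the paper itself advocates bi-radial functions), and all names and parameter choices are hypothetical:

```python
import numpy as np

def rbf_output(x, centers, widths, weights):
    # f(x) = sum_i w_i * G_i(x), with Gaussian G_i chosen for illustration:
    # G_i(x) = exp(-||x - c_i||^2 / (2 s_i^2))
    d2 = np.sum((np.asarray(centers) - np.asarray(x)) ** 2, axis=1)
    return float(np.dot(weights, np.exp(-d2 / (2 * np.asarray(widths) ** 2))))
```

A query at a basis center with unit weight and all other weights zero returns approximately 1, since the Gaussian peaks at its center.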
Global Optimization for Artificial Neural Networks: A Tabu Search Application
"... The ability of neural networks to closely approximate unknown functions to any degree of desired accuracy has generated considerable demand for Neural Network research in Business. The attractiveness of neural network research stems from researchers' need to approximate models within the busine ..."
Abstract

Cited by 21 (1 self)
The ability of neural networks to closely approximate unknown functions to any degree of desired accuracy has generated considerable demand for Neural Network research in Business. The attractiveness of neural network research stems from researchers' need to approximate models within the business environment without having a priori knowledge about the true underlying function. Gradient techniques, such as backpropagation, are currently the most widely used methods for neural network optimization. Since these techniques search for local solutions, a global search algorithm is warranted. In this paper we examine a recently popularized optimization technique, Tabu Search, as a possible alternative to the problematic backpropagation. A Monte Carlo study was conducted to test the appropriateness of this global search technique for optimizing neural networks. Holding the neural network architecture constant, 530 independent runs were conducted for each of seven test functions, including a pr...
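The tabu search procedure the abstract proposes as an alternative to gradient descent can be sketched in its generic form; the neighborhood structure, memory length, and the integer toy objective used below are illustrative assumptions, not the paper's Monte Carlo setup:

```python
def tabu_search(f, x0, neighbors, iters=100, tabu_len=10):
    # Move to the best non-tabu neighbor each step, keeping a short memory of
    # recently visited states so the search can climb out of local minima.
    current, best = x0, x0
    tabu = [x0]
    for _ in range(iters):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=f)   # best admissible move, even if uphill
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if f(current) < f(best):
            best = current
    return best
```

For neural network training, the states would be weight vectors and the neighborhood a set of small perturbations; this sketch omits aspiration criteria and other refinements.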
Towards Long-Term Prediction
, 2000
Abstract

Cited by 15 (4 self)
This paper describes a simple method of obtaining longer-term predictions from a nonlinear time series, assuming one already has a reasonably good short-term predictor. The usefulness of the technique is that it eliminates, to some extent, the systematic errors of the iterated short-term predictor. The technique we describe also provides an indication of the prediction horizon. We consider systems with both observational and dynamic noise and analyse a number of artificial and experimental systems, obtaining consistent results. We also compare this method of longer-term prediction with ensemble prediction.
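The iterated short-term predictor the abstract refers to can be sketched as follows; the function names are hypothetical, and the feedback loop is exactly where the systematic errors that the paper's method corrects accumulate:

```python
def iterate_predictor(predict_one, history, horizon):
    # Roll a one-step predictor forward by feeding each prediction
    # back into the input window; errors compound with each step.
    window = list(history)
    preds = []
    for _ in range(horizon):
        y = predict_one(window)
        preds.append(y)
        window = window[1:] + [y]
    return preds
```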
A Smoothing Regularizer for Feedforward and Recurrent Neural Networks
, 1996
Abstract

Cited by 12 (1 self)
We derive a smoothing regularizer for dynamic network models by requiring robustness in prediction performance to perturbations of the training data. The regularizer can be viewed as a generalization of the first-order Tikhonov stabilizer to dynamic models. For two-layer networks with recurrent connections described by

Y(t) = f( W Y(t − τ) + V X(t) ),   Z(t) = U Y(t),

the training criterion with the regularizer is

D = (1/N) Σ_{t=1}^{N} ||Z(t) − Ẑ(Φ, I(t))||² + λ ρ_τ²(Φ),

where Φ = {U, V, W} is the network parameter set, Z(t) are the targets, I(t) = {X(s); s = 1, 2, …, t} represents the current and all historical input information, N is the size of the training data set, ρ_τ²(Φ) is the regularizer, and λ is a regularization parameter. The closed-form expression for the regularizer for time-lagged recurrent networks is

ρ_τ(Φ) = ( γ||U|| ||V|| / (1 − γ||W||) ) [ 1 − e^{(γ||W||−1)/τ} ], ...
Smoothing Regularizers for Projective Basis Function Networks
, 1996
Abstract

Cited by 12 (1 self)
Smoothing regularizers for radial basis functions have been studied extensively, but no general smoothing regularizers for projective basis functions (PBFs), such as the widely-used sigmoidal PBFs, have heretofore been proposed. We derive new classes of algebraically-simple mth-order smoothing regularizers for networks of projective basis functions

f(W, x) = Σ_{j=1}^{N} u_j g[ x · v_j + v_{j0} ] + u_0,

with general transfer functions g[·]. These regularizers are:

R_G(W, m) = Σ_j u_j² ||v_j||^{2m}   (Global Form)
R_L(W, m) = Σ_j u_j² ||v_j||^{2m}   (Local Form)

With appropriate constant factors, these regularizers bound the corresponding mth-order smoothing integral

∫ Ω(x) || ∂^m f(W, x) / ∂x^m ||² dx.

In the above expressions, {v_j} are the projection vectors, W denotes all the network weights {u_j, u_0, v_j, v_{j0}}, and Ω(x) is a weighting function (not necessarily the input density) on the D-dimensional input space. The global and local cases are distinguished by different choices of Ω(x).