Results 1–10 of 177
ANFIS: Adaptive-Network-Based Fuzzy Inference System
, 1993
"... This paper presents the architecture and learning procedure underlying ANFIS (AdaptiveNetwork based Fuzzy Inference System), a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an inputoutput mapping bas ..."
Abstract

Cited by 432 (5 self)
 Add to MetaCart
This paper presents the architecture and learning procedure underlying ANFIS (Adaptive-Network-based Fuzzy Inference System), a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In our simulation, we employ the ANFIS architecture to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested. 1 Introduction System modeling based on conventional mathematical tools (e.g., differential equations) is not well suited for dealing with ill-define...
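A minimal illustration of the hybrid learning step described above (a toy sketch, not Jang's implementation: one input, two Gaussian membership functions, first-order Sugeno rules; all names and parameter values are illustrative):

    import numpy as np

    # Toy one-input, two-rule first-order Sugeno ANFIS (illustrative only).
    # Hybrid learning: consequent parameters theta by least squares on the
    # forward pass, premise (Gaussian MF) parameters by gradient descent on
    # the backward pass (numerical gradient here, for brevity).

    def gauss(x, c, s):
        return np.exp(-0.5 * ((x - c) / s) ** 2)

    def design(x, c, s):
        w = np.stack([gauss(x, c[0], s[0]), gauss(x, c[1], s[1])])  # firing strengths
        wn = w / w.sum(axis=0)                                      # normalized
        return np.column_stack([wn[0] * x, wn[0], wn[1] * x, wn[1]])

    def anfis_epoch(x, y, c, s, lr=0.01, eps=1e-5):
        A = design(x, c, s)
        theta, *_ = np.linalg.lstsq(A, y, rcond=None)   # forward pass: LSE
        sse = np.sum((A @ theta - y) ** 2)
        for params in (c, s):                           # backward pass
            for i in range(2):
                params[i] += eps
                sse2 = np.sum((design(x, c, s) @ theta - y) ** 2)
                params[i] -= eps + lr * (sse2 - sse) / eps
        return np.sqrt(sse / len(x))

    x = np.linspace(-2, 2, 100)
    y = np.sin(x)
    c, s = np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(50):
        rmse = anfis_epoch(x, y, c, s)
    print("train RMSE:", rmse)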
A New Evolutionary System for Evolving Artificial Neural Networks
 IEEE Transactions on Neural Networks
, 1996
"... This paper presents a new evolutionary system, i.e., EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP) [1], [2], [3]. Unlike most previous studies on evolving ANNs, this paper puts its emphasis on evolvin ..."
Abstract

Cited by 156 (35 self)
 Add to MetaCart
This paper presents a new evolutionary system, i.e., EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP) [1], [2], [3]. Unlike most previous studies on evolving ANNs, this paper puts its emphasis on evolving ANNs' behaviours. This is one of the primary reasons why EP is adopted. The five mutation operators proposed in EPNet reflect this emphasis on evolving behaviours. Close behavioural links between parents and their offspring are maintained by various mutations, such as partial training and node splitting. EPNet evolves ANN architectures and connection weights (including biases) simultaneously in order to reduce the noise in fitness evaluation. The parsimony of evolved ANNs is encouraged by preferring node/connection deletion to addition. EPNet has been tested on a number of benchmark problems in machine learning and ANNs, such as the parity problem, the medical diagnosis problems (bre...
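The deletion-before-addition ordering is the load-bearing detail of the parsimony bias. A toy sketch of that ordering (the "network" here is deliberately trivial, a (node count, weight scale) pair, nothing like EPNet's real MLP representation):

    import random

    # Toy sketch of EPNet's mutation ordering: candidate mutations are tried in
    # a fixed order, weight adjustment ("partial training") first and node
    # deletion before node addition, accepting the first child that does not
    # worsen fitness. Smaller networks are thereby preferred.

    def fitness(net):                 # toy fitness: lower is better
        nodes, scale = net
        return abs(scale - 1.0) + 0.01 * nodes

    def mutate(net, rng):
        nodes, scale = net
        candidates = [
            (nodes, scale + rng.gauss(0, 0.1)),  # "partial training"
            (max(1, nodes - 1), scale),          # deletion, preferred ...
            (nodes + 1, scale),                  # ... over addition
        ]
        for child in candidates:
            if fitness(child) <= fitness(net):   # first non-worsening child wins
                return child
        return net

    rng = random.Random(0)
    net = (10, 0.0)
    for _ in range(200):
        net = mutate(net, rng)
    print(net, fitness(net))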
Neuro-fuzzy modeling and control
 Proceedings of the IEEE
, 1995
"... Abstract  Fundamental and advanced developments in neurofuzzy synergisms for modeling and control are reviewed. The essential part of neurofuzzy synergisms comes from a common framework called adaptive networks, which uni es both neural networks and fuzzy models. The fuzzy models under the framew ..."
Abstract

Cited by 147 (1 self)
 Add to MetaCart
Fundamental and advanced developments in neuro-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuzzy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. The fuzzy models under the framework of adaptive networks are called ANFIS (Adaptive-Network-based Fuzzy Inference Systems), which possess certain advantages over neural networks. We introduce the design methods for ANFIS in both modeling and control applications. Current problems and future directions for neuro-fuzzy approaches are also addressed. Keywords: Fuzzy logic, neural networks, fuzzy modeling, neuro-fuzzy modeling, neuro-fuzzy control, ANFIS.
The Poincaré-Bendixson Theorem for Monotone Cyclic Feedback Systems with Delay
 Journal of Differential Equations
, 1996
"... We consider cyclic nearest neighbor systems of differential delay equations, in which the coupling between neighbors possesses a monotonicity property. Using a discrete (integervalued) Lyapunov function, we prove that the PoincaréBendixson theorem holds for such systems. We also obtain results on ..."
Abstract

Cited by 69 (5 self)
 Add to MetaCart
We consider cyclic nearest-neighbor systems of differential delay equations in which the coupling between neighbors possesses a monotonicity property. Using a discrete (integer-valued) Lyapunov function, we prove that the Poincaré-Bendixson theorem holds for such systems. We also obtain results on piecewise monotonicity and stability of periodic solutions of such systems.
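Schematically, and in notation of my choosing rather than necessarily the paper's, a monotone cyclic feedback system with delay couples each component to its predecessor and closes the loop through a delay, with a fixed sign on every coupling:

    \dot{x}^i(t) = f^i\big(x^i(t),\, x^{i-1}(t)\big), \quad i = 1, \dots, N-1,
    \qquad
    \dot{x}^0(t) = f^0\big(x^0(t),\, x^{N-1}(t-\tau)\big),

    \delta^i \, \frac{\partial f^i}{\partial y}(x, y) > 0
    \quad \text{for some fixed } \delta^i \in \{-1, +1\}.

The Poincaré-Bendixson conclusion is then that the omega-limit set of any bounded solution is either a single periodic orbit, or consists of equilibria together with orbits connecting them.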
The Unscented Kalman Filter for nonlinear estimation
, 2000
"... The Extended Kalman Filter (EKF) has become a standard technique used in a number of nonlinear estimation and machine learning applications. These include estimating the state of a nonlinear dynamic system, estimating parameters for nonlinear system identification (e.g., learning the weights of a ne ..."
Abstract

Cited by 66 (4 self)
 Add to MetaCart
The Extended Kalman Filter (EKF) has become a standard technique used in a number of nonlinear estimation and machine learning applications. These include estimating the state of a nonlinear dynamic system, estimating parameters for nonlinear system identification (e.g., learning the weights of a neural network), and dual estimation (e.g., the Expectation Maximization (EM) algorithm), where both states and parameters are estimated simultaneously. This paper points out the flaws in using the EKF and introduces an improvement, the Unscented Kalman Filter (UKF), proposed by Julier and Uhlmann [5]. A central and vital operation performed in the Kalman Filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF, the state distribution is approximated ...
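The unscented transform at the core of the UKF can be sketched compactly: propagate a deterministic set of sigma points through the nonlinearity instead of linearizing it. The (alpha, beta, kappa) scaling constants follow the common parameterization, and the example function is an arbitrary choice of mine:

    import numpy as np

    # Minimal unscented-transform sketch: push a Gaussian (mean, cov) through a
    # nonlinearity f via 2n+1 sigma points, recovering the transformed mean and
    # covariance from weighted sample statistics.

    def unscented_transform(mean, cov, f, alpha=1e-1, beta=2.0, kappa=0.0):
        n = mean.size
        lam = alpha**2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)             # matrix square root
        sigmas = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + (1 - alpha**2 + beta)
        ys = np.array([f(s) for s in sigmas])               # propagate each point
        y_mean = wm @ ys
        d = ys - y_mean
        y_cov = (wc[:, None] * d).T @ d
        return y_mean, y_cov

    # Example: a Gaussian through a polar-to-Cartesian conversion.
    f = lambda s: np.array([s[0] * np.cos(s[1]), s[0] * np.sin(s[1])])
    m, C = unscented_transform(np.array([1.0, 0.5]), np.diag([0.01, 0.04]), f)
    print(m, C, sep="\n")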
A practical method for calculating largest Lyapunov exponents from small data sets
 Physica D
, 1993
"... Detecting the presence of chaos in a dynamical system is an important problem that is solved by measuring the largest Lyapunov exponent. Lyapunov exponents quantify the exponential divergence of initially close statespace trajectories and estimate the amount of chaos in a system. We present a new m ..."
Abstract

Cited by 62 (0 self)
 Add to MetaCart
Detecting the presence of chaos in a dynamical system is an important problem that is solved by measuring the largest Lyapunov exponent. Lyapunov exponents quantify the exponential divergence of initially close state-space trajectories and estimate the amount of chaos in a system. We present a new method for calculating the largest Lyapunov exponent from an experimental time series. The method follows directly from the definition of the largest Lyapunov exponent and is accurate because it takes advantage of all the available data. We show that the algorithm is fast, easy to implement, and robust to changes in the following quantities: embedding dimension, size of data set, reconstruction delay, and noise level. Furthermore, one may use the algorithm to simultaneously calculate the correlation dimension. Thus, one sequence of computations will yield an estimate of both the level of chaos and the system complexity.
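A compact sketch of the divergence-tracing idea (parameter names and values are mine; the published method includes refinements such as a mean-period exclusion window and fitting only the initial linear region of the curve):

    import numpy as np

    # Nearest-neighbor divergence sketch: delay-embed the series, pair each
    # point with its nearest neighbor outside a temporal exclusion window,
    # track the mean log-separation as both trajectories evolve, and read
    # lambda_1 off the slope of that curve.

    def largest_lyap(x, dim=3, tau=1, horizon=8, window=10):
        m = len(x) - (dim - 1) * tau
        Y = np.array([x[i:i + dim * tau:tau] for i in range(m)])  # embedding
        d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
        for i in range(m):                                        # exclusion window
            d[i, max(0, i - window):i + window + 1] = np.inf
        nn = d.argmin(axis=1)                                     # nearest neighbors
        idx = np.arange(m)
        logd = []
        for k in range(1, horizon):
            ok = (idx + k < m) & (nn + k < m)
            sep = np.linalg.norm(Y[idx[ok] + k] - Y[nn[ok] + k], axis=1)
            logd.append(np.log(sep[sep > 0]).mean())
        return np.polyfit(np.arange(1, horizon), logd, 1)[0]      # slope = lambda_1

    # Logistic map at r=4; expect a slope near ln 2 (about 0.69) per iteration.
    x = np.empty(1000)
    x[0] = 0.3
    for i in range(999):
        x[i + 1] = 4 * x[i] * (1 - x[i])
    print(largest_lyap(x))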
The Kernel Recursive Least-Squares Algorithm
 IEEE Transactions on Signal Processing
, 2003
"... We present a nonlinear kernelbased version of the Recursive Least Squares (RLS) algorithm. Our KernelRLS (KRLS) algorithm performs linear regression in the feature space induced by a Mercer kernel, and can therefore be used to recursively construct the minimum mean squared error regressor. Spars ..."
Abstract

Cited by 62 (2 self)
 Add to MetaCart
We present a nonlinear kernel-based version of the Recursive Least Squares (RLS) algorithm. Our Kernel RLS (KRLS) algorithm performs linear regression in the feature space induced by a Mercer kernel, and can therefore be used to recursively construct the minimum mean-squared-error regressor. Sparsity of the solution is achieved by a sequential sparsification process that admits into the kernel representation a new input sample only if its feature-space image cannot be sufficiently well approximated by combining the images of previously admitted samples. This sparsification procedure is crucial to the operation of KRLS: it both allows KRLS to operate online and effectively regularizes its solutions. A theoretical analysis of the sparsification method reveals its close affinity to kernel PCA, and a data-dependent loss bound is presented, quantifying the generalization performance of the KRLS algorithm. We demonstrate the performance and scaling properties of KRLS and compare it to a state-of-the-art Support Vector Regression algorithm, using both synthetic and real data. We additionally test KRLS on two signal processing problems in which the use of traditional least-squares methods is commonplace: time-series prediction and channel equalization.
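The sparsification test described here (admit a sample only when its feature-space image is poorly approximated by the current dictionary) can be sketched as follows; the RBF kernel and threshold value are illustrative choices of mine, and the recursive weight update of full KRLS is omitted:

    import numpy as np

    # Sequential sparsification sketch: delta measures the residual of the
    # best approximation of phi(x) by the dictionary's feature images; only
    # poorly represented samples (delta > nu) are admitted.

    def rbf(a, b, gamma=1.0):
        return np.exp(-gamma * np.sum((a - b) ** 2))

    def build_dictionary(X, nu=0.3):
        dic = [X[0]]
        for x in X[1:]:
            k_vec = np.array([rbf(x, d) for d in dic])
            K = np.array([[rbf(a, b) for b in dic] for a in dic])
            a_opt = np.linalg.solve(K + 1e-10 * np.eye(len(dic)), k_vec)
            delta = rbf(x, x) - k_vec @ a_opt    # approximation residual
            if delta > nu:                       # not well represented: admit
                dic.append(x)
        return np.array(dic)

    X = np.random.default_rng(0).normal(size=(500, 2))
    D = build_dictionary(X)
    print(len(D), "dictionary points out of", len(X))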
Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models
 In Proceedings of the Workshop on Advances in Machine Learning
, 2003
"... Probabilistic inference is the problem of estimating the hidden states of a system in an optimal and consistent fashion given a set of noisy or incomplete observations. The optimal solution to this problem is given by the recursive Bayesian estimation algorithm which recursively updates the post ..."
Abstract

Cited by 45 (5 self)
 Add to MetaCart
Probabilistic inference is the problem of estimating the hidden states of a system in an optimal and consistent fashion given a set of noisy or incomplete observations. The optimal solution to this problem is given by the recursive Bayesian estimation algorithm which recursively updates the posterior density of the system state as new observations arrive online.
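The recursive Bayesian estimation algorithm mentioned here can be written in its standard two-step form (standard notation, not necessarily the paper's):

    p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, dx_{k-1}
    \quad \text{(predict)}

    p(x_k \mid y_{1:k}) = \frac{p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})}
    {\int p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})\, dx_k}
    \quad \text{(update)}

Sigma-point filters approximate these integrals by propagating a small, deterministically chosen set of sample points, as in the unscented-transform sketch above.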
Anomaly Detection Using Real-Valued Negative Selection
 Genetic Programming and Evolvable Machines
, 2004
"... This paper describes a realvalued representation for the negative selection algorithm and its applications to anomaly detection. In many anomaly detection applications, only positive (normal) samples are available for training purpose. However, conventional classification algorithms need samples fo ..."
Abstract

Cited by 45 (5 self)
 Add to MetaCart
This paper describes a real-valued representation for the negative selection algorithm and its applications to anomaly detection. In many anomaly detection applications, only positive (normal) samples are available for training purposes. However, conventional classification algorithms need samples for all classes (e.g., normal and abnormal) during the training phase. This approach uses only normal samples to generate abnormal samples, which are used as input to a classification algorithm. This hybrid approach is compared against an anomaly detection technique that uses self-organizing maps to cluster the normal data sets (samples). Experiments are performed with different data sets and some results are reported.
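For orientation, a sketch of one common real-valued negative selection variant (the paper's hybrid approach additionally feeds the generated abnormal samples to a classifier; radii, counts, and the data here are toy values of mine):

    import numpy as np

    # Random detector points are kept only if they fail to match (fall within
    # radius r of) any normal training sample; a test point matched by some
    # detector is flagged anomalous.

    def train_detectors(normal, n_detectors=200, r=0.1, seed=0):
        rng = np.random.default_rng(seed)
        detectors = []
        while len(detectors) < n_detectors:
            cand = rng.uniform(0, 1, size=normal.shape[1])
            if np.linalg.norm(normal - cand, axis=1).min() > r:  # no self-match
                detectors.append(cand)
        return np.array(detectors)

    def is_anomalous(x, detectors, r=0.1):
        return np.linalg.norm(detectors - x, axis=1).min() <= r

    rng = np.random.default_rng(1)
    normal = 0.5 + 0.05 * rng.normal(size=(300, 2))    # tight "self" cluster
    D = train_detectors(normal)
    print(is_anomalous(np.array([0.5, 0.5]), D))       # inside self: False
    print(is_anomalous(np.array([0.9, 0.1]), D))       # far from self: True (likely)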
Simultaneous Training of Negatively Correlated Neural Networks in an Ensemble
 IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
, 1999
"... This paper presents a new cooperative ensemble learning system (CELS) for designing neural network ensembles. The idea behind CELS is to encourage different individual networks in an ensemble to learn different parts or aspects of a training data so that the ensemble can learn the whole training dat ..."
Abstract

Cited by 44 (20 self)
 Add to MetaCart
This paper presents a new cooperative ensemble learning system (CELS) for designing neural network ensembles. The idea behind CELS is to encourage the different individual networks in an ensemble to learn different parts or aspects of the training data, so that the ensemble can learn the whole training data better. In CELS, the individual networks are trained simultaneously rather than independently or sequentially. This provides an opportunity for the individual networks to interact with each other and to specialize. CELS can create negatively correlated neural networks, using a correlation penalty term in the error function to encourage such specialization. This paper analyzes CELS in terms of the bias-variance-covariance tradeoff. CELS has also been tested on the Mackey-Glass time series prediction problem and the Australian credit card assessment problem. The experimental results show that CELS can produce neural network ensembles with good generalization ability.
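The correlation penalty can be illustrated with linear ensemble members (a toy setup of mine, not the paper's networks); the simplified per-member gradient e_i - lam * (F_i - Fbar) follows the usual negative-correlation-learning derivation, and lam is an illustrative value:

    import numpy as np

    # Each member is trained on its own error minus a term proportional to its
    # deviation from the ensemble mean, pushing members apart so they specialize
    # while the ensemble average still fits the target.

    def ncl_step(W, X, y, lam=0.5, lr=0.01):
        F = X @ W.T                                   # member outputs
        Fbar = F.mean(axis=1, keepdims=True)          # ensemble output
        for i in range(W.shape[0]):
            e = F[:, [i]] - y                         # individual error
            p = F[:, [i]] - Fbar                      # deviation from ensemble
            W[i] -= lr * ((e - lam * p) * X).mean(axis=0)
        return W

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([[1.0], [-2.0], [0.5]])
    W = rng.normal(size=(4, 3))                       # four ensemble members
    for _ in range(500):
        W = ncl_step(W, X, y)
    ens = (X @ W.T).mean(axis=1, keepdims=True)
    print("ensemble MAE:", float(np.abs(ens - y).mean()))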