Results 1–10 of 15
ANFIS: Adaptive-Network-Based Fuzzy Inference System
, 1993
Abstract

Cited by 432 (5 self)
This paper presents the architecture and learning procedure underlying ANFIS (Adaptive-Network-based Fuzzy Inference System), a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In our simulation, we employ the ANFIS architecture to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested.
1 Introduction
System modeling based on conventional mathematical tools (e.g., differential equations) is not well suited for dealing with ill-define...
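The hybrid learning procedure described above can be sketched for a single-input, two-rule, first-order Sugeno model. Everything here (Gaussian memberships, the chosen centers, the x² target) is an illustrative assumption, not the paper's experiments; the point is the LSE half of the hybrid rule: with premise parameters held fixed, the output is linear in the consequent parameters, so they can be solved by ordinary least squares.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function (illustrative choice)."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def anfis_forward(x, centers, sigmas, consequents):
    # Layers 1-2: membership grades double as firing strengths (one input per rule)
    w = np.array([gauss(x, c, s) for c, s in zip(centers, sigmas)])
    wbar = w / w.sum()                                   # layer 3: normalize
    f = np.array([p * x + r for p, r in consequents])    # layer 4: linear consequents
    return float(wbar @ f)                               # layer 5: weighted sum

# LSE step of the hybrid rule on a toy target y = x^2 over [0, 2].
centers, sigmas = [0.5, 1.5], [0.5, 0.5]
X = np.linspace(0.0, 2.0, 50)
y = X ** 2                                  # toy target, not from the paper
A = []
for x in X:
    w = np.array([gauss(x, c, s) for c, s in zip(centers, sigmas)])
    wbar = w / w.sum()
    A.append([wbar[0] * x, wbar[0], wbar[1] * x, wbar[1]])
theta, *_ = np.linalg.lstsq(np.array(A), y, rcond=None)
consequents = [(theta[0], theta[1]), (theta[2], theta[3])]
pred = np.array([anfis_forward(x, centers, sigmas, consequents) for x in X])
max_err = np.max(np.abs(pred - y))
print(max_err)  # small: two blended local lines track the smooth target
```

In the full hybrid procedure the premise parameters (centers, widths) would then be updated by gradient descent, alternating with this least-squares pass.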
Neuro-fuzzy modeling and control
 Proceedings of the IEEE
, 1995
Abstract

Cited by 147 (1 self)
Fundamental and advanced developments in neuro-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuzzy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. The fuzzy model under the framework of adaptive networks is called ANFIS (Adaptive-Network-based Fuzzy Inference System), which possesses certain advantages over neural networks. We introduce the design methods for ANFIS in both modeling and control applications. Current problems and future directions for neuro-fuzzy approaches are also addressed.
Keywords: Fuzzy logic, neural networks, fuzzy modeling, neuro-fuzzy modeling, neuro-fuzzy control, ANFIS.
Functional Equivalence between Radial Basis Function Networks and Fuzzy Inference Systems
, 1993
Abstract

Cited by 126 (4 self)
This short article shows that, under some minor restrictions, the functional behavior of radial basis function networks and fuzzy inference systems is actually equivalent. This functional equivalence implies that advances in each literature, such as new learning rules or analyses of representational power, can be applied to both models directly. It is of interest to observe that two models stemming from different origins turn out to be functionally equivalent.
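The equivalence claimed above can be checked numerically in one of its simplest forms: a zero-order Sugeno fuzzy system with Gaussian membership functions and weighted-average defuzzification computes exactly the same function as a normalized Gaussian RBF network with matching centers, widths, and weights. The specific centers and parameters below are arbitrary illustrative values.

```python
import numpy as np

def fis_output(x, centers, sigmas, singletons):
    # Zero-order Sugeno fuzzy system: Gaussian memberships over one antecedent,
    # weighted-average defuzzification over singleton consequents.
    mu = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)
    return np.dot(mu, singletons) / mu.sum()

def rbfn_output(x, centers, sigmas, weights):
    # Normalized radial basis function network with Gaussian units.
    phi = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)
    return np.dot(phi / phi.sum(), weights)

centers = np.array([0.0, 1.0, 2.0])
sigmas = np.array([0.4, 0.4, 0.4])
params = np.array([1.0, -2.0, 0.5])          # doubles as singletons and RBF weights
xs = np.linspace(-1.0, 3.0, 21)
a = np.array([fis_output(x, centers, sigmas, params) for x in xs])
b = np.array([rbfn_output(x, centers, sigmas, params) for x in xs])
print(np.allclose(a, b))  # True: the two models compute the same function
```

The restrictions the article refers to are visible in the setup: the number of RBF units must equal the number of rules, the membership functions must be Gaussian with matched widths, and the same combination rule must be used on both sides.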
Regression Modeling in Back-Propagation and Projection Pursuit Learning
, 1994
Abstract

Cited by 65 (1 self)
We studied and compared two types of connectionist learning methods for model-free regression problems in this paper. One is the popular back-propagation learning (BPL), well known in the artificial neural networks literature; the other is projection pursuit learning (PPL), which emerged in recent years in the statistical estimation literature. Both the BPL and the PPL are based on projections of the data in directions determined from interconnection weights. However, unlike the use of fixed nonlinear activations (usually sigmoidal) for the hidden neurons in BPL, the PPL systematically approximates the unknown nonlinear activations. Moreover, the BPL estimates all the weights simultaneously at each iteration, while the PPL estimates the weights cyclically (neuron-by-neuron and layer-by-layer) at each iteration. Although the BPL and the PPL have comparable training speed when based on a Gauss-Newton optimization algorithm, the PPL proves more parsimonious in that the PPL requires fewer hi...
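The structural difference described above, a data-driven activation g fitted per projection rather than a fixed sigmoid, can be illustrated with a crude single-unit projection pursuit stage. This is only a sketch of the idea (a grid scan over directions and a polynomial smoother stand in for the paper's Gauss-Newton machinery); the target function is an arbitrary ridge function chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] + 2.0 * X[:, 1]) ** 2          # a ridge function: g(w.x) with g(u) = u^2

best = None
for angle in np.linspace(0.0, np.pi, 60):    # scan candidate projection directions
    w = np.array([np.cos(angle), np.sin(angle)])
    u = X @ w                                # project the data onto this direction
    coef = np.polyfit(u, y, deg=3)           # data-driven activation for this direction
    resid = np.mean((np.polyval(coef, u) - y) ** 2)
    if best is None or resid < best[0]:
        best = (resid, w, coef)

mse, w, coef = best
print(mse)  # small: the estimated direction and activation recover the ridge
```

A fixed-sigmoid hidden unit cannot represent this bump-shaped activation exactly; the PPL-style unit recovers both the direction (close to (1, 2) up to scale) and the quadratic shape from data.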
Prediction of Chaotic Time Series with Neural Networks
 INT. J. BIFURCATION AND CHAOS
, 1992
Abstract

Cited by 30 (8 self)
This paper shows that the dynamics of nonlinear systems that produce complex time series can be captured in a model system. The model system is an artificial neural network, trained with backpropagation in a multi-step prediction framework. Results from the Mackey-Glass equation (D = 30) will be presented to corroborate our claim. Our final intent is to study the applicability of the method to the electroencephalogram, but first several important questions must be answered to guarantee appropriate modeling.
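The Mackey-Glass benchmark referred to above, and the delay-embedding inputs such a predictor trains on, can be sketched as follows. Forward-Euler integration with a unit step is a crude but common discretization, and the lag/horizon choices mirror the usual benchmark setup rather than this paper's exact protocol.

```python
import numpy as np

# Mackey-Glass delay-differential equation:
#   dx/dt = 0.2 x(t - tau) / (1 + x(t - tau)^10) - 0.1 x(t),  with tau = 30
def mackey_glass(n, tau=30, dt=1.0, x0=1.2):
    hist = int(tau / dt)
    x = np.full(n + hist, x0)                # constant history as initial condition
    for t in range(hist, n + hist - 1):
        xd = x[t - hist]                     # the delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (0.2 * xd / (1.0 + xd ** 10) - 0.1 * x[t])
    return x[hist:]

series = mackey_glass(1000)

# Delay embedding: input (x(t), x(t-6), x(t-12), x(t-18)), target x(t+6),
# the common setup for multi-step prediction experiments on this series.
def embed(series, lags=(0, 6, 12, 18), horizon=6):
    start, end = max(lags), len(series) - horizon
    X = np.array([[series[t - l] for l in lags] for t in range(start, end)])
    y = series[start + horizon:end + horizon]
    return X, y

X, y = embed(series)
print(X.shape, y.shape)
```

In the multi-step framework the network's own output would be fed back as input to predict further ahead, which is what distinguishes it from single-step training.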
Learning Controllers for Industrial Robots
, 1996
Abstract

Cited by 27 (14 self)
One of the most significant cost factors in robotics applications is the design and development of real-time robot control software. Control theory helps when linear controllers have to be developed, but it does not sufficiently support the generation of nonlinear controllers, although in many cases (such as compliance control) nonlinear control is essential for achieving high performance. This paper discusses how Machine Learning has been applied to the design of (non)linear controllers. Several alternative function approximators, including Multilayer Perceptrons (MLPs), Radial Basis Function Networks (RBFNs), and Fuzzy Controllers, are analyzed and compared, leading to the definition of two major families: Open Field Function Approximators and Locally Receptive Field Function Approximators. It is shown that RBFNs and Fuzzy Controllers bear strong similarities, and that both have a symbolic interpretation. This characteristic allows for applying both symbolic and statis...
A Constructive Learning Algorithm for Local Model Networks
 in Proceedings of the IEEE Workshop on Computer-Intensive Methods in Control and Signal Processing
, 1995
Abstract

Cited by 9 (3 self)
Local Model Networks are flexible architectures for the representation of complex nonlinear dynamic systems. The local nature of the representation leads to a modular network which can integrate a variety of paradigms (neural nets, statistics, fuzzy systems and a priori mathematical models), but because of the power of the local models, the architecture is less sensitive to the curse of dimensionality than other local representations, such as Radial Basis Function networks. The concept of 'locality' is a difficult one to define, and tends to vary over a problem's input space, so a constructive structure identification algorithm is presented which automatically defines a suitable model structure on the basis of the observed data from the process being identified. Local learning algorithms are introduced for the local model parameter optimisation, which save computational effort and produce more interpretable and robust models.
1. Introduction
Computationally intensive learning systems...
Nonmonotonic Activation Functions in Multilayer Perceptrons
, 1993
Abstract

Cited by 6 (1 self)
Multilayer perceptrons (MLPs) and radial basis function networks (RBFNs) are the two most common types of feedforward neural networks used for pattern classification and continuous function approximation. MLPs are characterized by slow learning speed, low memory retention, and small node requirements, while RBFNs are known to have high learning speed, high memory retention, but large node requirements. This dissertation asks and answers the question: "Can we do better?" Two types of neural network architectures are introduced: the hyperridge and the hyperhill. A hyperridge network is a perceptron with no hidden layers and an activation function of the form g(h) = sgn(c^2 - h^2) (h is the net input; c is a constant "width"), while a hyp...
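The hyperridge activation quoted in the abstract fires +1 inside the slab |w.x| < c and -1 outside it, which is why a single unit with no hidden layer can separate classes that defeat an ordinary perceptron. A small sketch on XOR (the weight and width values are illustrative choices, not from the dissertation):

```python
import numpy as np

def hyperridge(x, w, c):
    # Hyperridge activation: g(h) = sgn(c^2 - h^2), i.e. +1 iff |h| < c,
    # where h = w.x is the net input and c is the constant "width".
    h = np.dot(x, w)
    return np.sign(c ** 2 - h ** 2)

# XOR with no hidden layer: the slab around the line x1 = x2 puts the
# "equal inputs" points inside and the "different inputs" points outside.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
w, c = np.array([1.0, -1.0]), 0.5
out = np.array([hyperridge(x, w, c) for x in X])
print(out)  # [ 1. -1. -1.  1.] -> +1 where XOR is 0, -1 where XOR is 1
```

A standard sign-of-linear perceptron cannot realize this labeling with any single hyperplane, which is the representational gain the nonmonotonic activation buys.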
Growing Radial Basis Function Networks
 In Proceedings of the Fourth Workshop on Learning Robots
, 1995
Abstract

Cited by 5 (0 self)
This paper presents and evaluates two algorithms for incrementally constructing Radial Basis Function Networks, a class of neural networks which looks more suitable for adaptive control applications than the more popular backpropagation networks. The first algorithm is derived from a previous method developed by Fritzke, while the second one is inspired by the CART algorithm developed by Breiman for generating regression trees. Both algorithms proved to work well on a number of tests and exhibit comparable performance. An evaluation on the standard case study of the Mackey-Glass time series is reported.
Key Words: Machine Learning, Robotics, Neural Nets
1 INTRODUCTION
Recent developments in control techniques based on Artificial Neural Networks have focused attention on a broad class of function approximators called Locally Receptive Field Networks (LRFNs). The reason is that LRFNs show properties which make them more suitable for adaptive control applications than othe...
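The incremental-construction idea can be sketched with a simple greedy variant (not either of the paper's two algorithms): repeatedly place a new Gaussian unit where the current residual is largest, then refit all output weights by least squares. Target function, unit width, and unit count are illustrative assumptions.

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 120)
y = np.tanh(2.0 * x) + 0.3 * np.sin(3.0 * x)   # toy target, not from the paper

sigma = 0.8
centers = []
weights = np.zeros(0)

def predict(x, centers, weights, sigma):
    if not centers:
        return np.zeros_like(x)
    Phi = np.exp(-0.5 * ((x[:, None] - np.array(centers)) / sigma) ** 2)
    return Phi @ weights

# Greedy growth: add a unit at the point of maximum residual error,
# then refit every output weight by least squares.
for _ in range(8):
    resid = y - predict(x, centers, weights, sigma)
    centers.append(x[np.argmax(np.abs(resid))])
    Phi = np.exp(-0.5 * ((x[:, None] - np.array(centers)) / sigma) ** 2)
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)

mse = np.mean((predict(x, centers, weights, sigma) - y) ** 2)
print(len(centers), mse)  # error shrinks as the network grows
```

Fritzke-style growth additionally adapts unit positions and inserts units based on accumulated local error, and the CART-inspired variant partitions the input space instead, but both share this grow-then-refit loop.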
Local Learning In Local Model Networks
 in Proc. 4th IEE Int. Conference on Artificial Neural Networks
, 1994
Abstract

Cited by 4 (2 self)
Local Model Networks are hybrid models which allow the easy integration of a priori knowledge, as well as the ability to learn from data, to represent complex, multidimensional dynamic systems. This paper points out problems with global learning methods in Local Model Networks. The bias/variance trade-offs for local and global learning are examined, and it is illustrated that local learning has a regularizing effect that can make it favorable compared to global learning in some cases.
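Local learning as contrasted above can be sketched for a one-dimensional Local Model Network: each local linear model is fit by weighted least squares on its own validity region, independently of the other models, instead of solving one global least-squares problem over all parameters jointly. The target function, validity centers, and widths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 80)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)   # toy process, not from the paper

centers, sigma = np.array([0.5, 1.5, 2.5, 3.5]), 0.6
rho = np.exp(-0.5 * ((x[:, None] - centers) / sigma) ** 2)
rho /= rho.sum(axis=1, keepdims=True)                # normalized validity functions

# Local learning: one independent weighted least-squares fit per local model,
# each weighted by its own validity function.
Phi = np.column_stack([x, np.ones_like(x)])          # local models are affine in x
params = []
for i in range(centers.size):
    w = np.sqrt(rho[:, i])
    theta, *_ = np.linalg.lstsq(Phi * w[:, None], y * w, rcond=None)
    params.append(theta)
params = np.array(params)                            # one (slope, offset) per model

pred = np.sum(rho * (Phi @ params.T), axis=1)        # validity-weighted blend
mse = np.mean((pred - y) ** 2)
print(mse)  # small on this smooth target
```

Because each fit only sees data its validity function weights highly, the local parameters stay interpretable as local linearizations; the bias this introduces relative to the jointly optimal global fit is exactly the regularizing effect the paper examines.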