Results 1-10 of 21
ANFIS: Adaptive-Network-Based Fuzzy Inference System
IEEE Transactions on Systems, Man, and Cybernetics, 1993
Neuro-fuzzy modeling and control
Proceedings of the IEEE, 1995
Cited by 239 (1 self)
Abstract: Fundamental and advanced developments in neuro-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuzzy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. A fuzzy model under the framework of adaptive networks is called an ANFIS (Adaptive-Network-based Fuzzy Inference System), which possesses certain advantages over neural networks. We introduce the design methods for ANFIS in both modeling and control applications. Current problems and future directions for neuro-fuzzy approaches are also addressed.
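As an illustration of the fuzzy-model side of this framework, here is a minimal NumPy sketch (not taken from the paper) of the forward pass of a first-order Sugeno model of the kind ANFIS trains. The Gaussian membership functions, the two-rule setup, and all parameter values are illustrative assumptions:

```python
import numpy as np

def gaussmf(x, c, sigma):
    """Gaussian membership function (one common ANFIS choice)."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def anfis_forward(x, centers, sigmas, consequents):
    """Forward pass of a one-input, first-order Sugeno (ANFIS-style) model.

    consequents[i] = (p_i, r_i): rule output f_i = p_i * x + r_i.
    """
    w = np.array([gaussmf(x, c, s) for c, s in zip(centers, sigmas)])  # firing strengths
    wn = w / w.sum()                                                   # normalization layer
    f = np.array([p * x + r for p, r in consequents])                  # rule consequents
    return float(np.dot(wn, f))                                        # weighted-sum output

# Illustrative two-rule model evaluated at x = 0.5
y = anfis_forward(0.5, centers=[0.0, 1.0], sigmas=[0.5, 0.5],
                  consequents=[(1.0, 0.0), (-1.0, 2.0)])
```

In actual ANFIS training, the membership parameters and consequent coefficients above are the quantities adapted by the hybrid gradient/least-squares procedure.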
Functional Equivalence between Radial Basis Function Networks and Fuzzy Inference Systems
1993
Cited by 169 (4 self)
Abstract: This short article shows that, under some minor restrictions, the functional behavior of radial basis function networks and fuzzy inference systems is equivalent. This functional equivalence implies that advances in either literature, such as new learning rules or analyses of representational power, can be applied to both models directly. It is of interest to observe that two models stemming from different origins turn out to be functionally equivalent.
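The equivalence can be checked numerically: under restrictions of the kind the paper describes (Gaussian receptive fields, a shared width, one RBF unit per fuzzy rule, weighted-average output), the two models reduce to the same formula. A sketch with illustrative parameter values:

```python
import numpy as np

def nrbf(x, centers, width, weights):
    """Normalized Gaussian RBF network with a shared width."""
    act = np.exp(-((x - centers) ** 2) / width ** 2)
    return np.dot(act, weights) / act.sum()

def sugeno0(x, centers, width, singletons):
    """Zero-order Sugeno fuzzy system: Gaussian MFs, one rule per center,
    weighted-average defuzzification."""
    mu = np.exp(-((x - centers) ** 2) / width ** 2)
    return np.dot(mu, singletons) / mu.sum()

c = np.array([-1.0, 0.0, 1.0])   # shared centers / rule premises
w = np.array([0.3, -0.7, 1.2])   # RBF weights = rule consequents
for x in np.linspace(-2, 2, 9):
    assert abs(nrbf(x, c, 0.8, w) - sugeno0(x, c, 0.8, w)) < 1e-12
```

The point of the restrictions is precisely that the two expressions become term-by-term identical, so any learning rule derived for one applies to the other.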
Regression Modeling in Back-Propagation and Projection Pursuit Learning
1994
Cited by 72 (1 self)
Abstract: In this paper we study and compare two types of connectionist learning methods for model-free regression problems. One is the popular back-propagation learning (BPL), well known in the artificial neural networks literature; the other is projection pursuit learning (PPL), which has emerged in recent years in the statistical estimation literature. Both BPL and PPL are based on projections of the data in directions determined from interconnection weights. However, unlike the fixed nonlinear activations (usually sigmoidal) used for the hidden neurons in BPL, PPL systematically approximates the unknown nonlinear activations. Moreover, BPL estimates all the weights simultaneously at each iteration, while PPL estimates the weights cyclically (neuron-by-neuron and layer-by-layer) at each iteration. Although BPL and PPL have comparable training speed when based on a Gauss-Newton optimization algorithm, PPL proves more parsimonious in that it requires fewer hi...
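To illustrate the distinction (an illustrative sketch, not the paper's algorithm): a BPL hidden unit applies a fixed sigmoid to a projection of the inputs, whereas a PPL unit estimates its ridge activation from the data, here crudely via a least-squares polynomial fit along the projection:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.tanh(X @ np.array([2.0, -1.0]))           # target with a single ridge direction

def bpl_unit(X, w):
    """BPL-style unit: fixed sigmoidal activation on a projection."""
    return np.tanh(X @ w)

def ppl_unit(X, y, w, degree=7):
    """PPL-style unit: the activation itself is estimated from data
    (here by a least-squares polynomial fit on the projected inputs)."""
    z = X @ w
    coeffs = np.polyfit(z, y, degree)            # estimated ridge function
    return np.polyval(coeffs, z)

w = np.array([2.0, -1.0])                        # projection direction (assumed known here)
err_ppl = np.mean((ppl_unit(X, y, w) - y) ** 2)  # small: the ridge function is recovered
```

Full PPL also updates the direction `w` itself, cycling between direction and activation estimates; that outer loop is omitted here.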
Human Emotion Recognition from Motion Using a Radial Basis Function Network Architecture
1994
Cited by 50 (3 self)
Abstract: In this paper a radial basis function network architecture is developed that learns the correlation between facial feature motion patterns and human emotions. We describe a hierarchical approach which at the highest level identifies emotions, at the mid level determines motion of facial features, and at the low level recovers motion directions. Individual emotion networks were trained to recognize the 'smile' and 'surprise' emotions. Each emotion network was trained by viewing a set of sequences of one emotion for many subjects. The trained neural network was then tested for retention, extrapolation, and rejection ability. Success rates were about 88% for retention, 73% for extrapolation, and 79% for rejection.

1. Introduction: Visual communication plays a central role in human communication and interaction. This paper explores methods by which a computer can recognize visually communicated facial actions, i.e., facial expressions. Developing such methods would contribute to human-computer interacti...
Prediction of Chaotic Time Series with Neural Networks
International Journal of Bifurcation and Chaos, 1992
Cited by 36 (8 self)
Abstract: This paper shows that the dynamics of nonlinear systems that produce complex time series can be captured in a model system. The model system is an artificial neural network, trained with back-propagation in a multi-step prediction framework. Results from the Mackey-Glass system (D=30) will be presented to corroborate our claim. Our final intent is to study the applicability of the method to the electroencephalogram, but first several important questions must be answered to guarantee appropriate modeling.
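For context, the Mackey-Glass benchmark and a multi-step (closed-loop) prediction scheme can be sketched as follows. The Euler discretization, all parameter values, and the stand-in "persistence" predictor are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def mackey_glass(n, tau=30, beta=0.2, gamma=0.1, exponent=10, dt=1.0, x0=1.2):
    """Euler-integrated Mackey-Glass delay series (delay tau=30, as in the paper)."""
    x = np.full(n + tau, x0)                     # constant initial history
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1 + x[t - tau] ** exponent)
                                - gamma * x[t])
    return x[tau:]

def iterate(predict, history, steps):
    """Multi-step (closed-loop) prediction: feed predictions back as inputs."""
    window = list(history)
    out = []
    for _ in range(steps):
        y = predict(np.array(window))
        out.append(y)
        window = window[1:] + [y]                # slide window over model's own output
    return out

series = mackey_glass(500)
persist = lambda window: float(window[-1])       # trivial stand-in for a trained network
preds = iterate(persist, series[-6:].tolist(), steps=10)
```

In the paper's framework, `predict` would be a back-propagation-trained network, and the closed-loop iteration is what distinguishes multi-step from single-step evaluation.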
Learning Controllers for Industrial Robots
1996
Cited by 31 (14 self)
Abstract: One of the most significant cost factors in robotics applications is the design and development of real-time robot control software. Control theory helps when linear controllers have to be developed, but it does not sufficiently support the generation of nonlinear controllers, although in many cases (such as in compliance control) nonlinear control is essential for achieving high performance. This paper discusses how Machine Learning has been applied to the design of (non)linear controllers. Several alternative function approximators, including Multilayer Perceptrons (MLPs), Radial Basis Function Networks (RBFNs), and Fuzzy Controllers, are analyzed and compared, leading to the definition of two major families: Open Field Function Approximators and Locally Receptive Field Function Approximators. It is shown that RBFNs and Fuzzy Controllers bear strong similarities, and that both have a symbolic interpretation. This characteristic allows for applying both symbolic and statis...
Extending the functional equivalence of radial basis function networks and fuzzy inference systems
IEEE Transactions on Neural Networks, 1996
Cited by 21 (3 self)
Abstract: We establish the functional equivalence of a generalized class of Gaussian radial basis function (RBF) networks and the full Takagi-Sugeno model of fuzzy inference. This generalizes an existing result which applies to the standard Gaussian RBF network and a restricted form of the Takagi-Sugeno fuzzy system. The more general framework allows the removal of some of the restrictive conditions of the previous result. In Section II we describe the generalized RBF network. The conditions under which it reduces to the standard network are given in Section II-B. The main features of the generalized RBF network are: 1) each processing unit in the network receives as input ...
A Constructive Learning Algorithm for Local Model Networks
In Proceedings of the IEEE Workshop on Computer-Intensive Methods in Control and Signal Processing, 1995
Cited by 9 (3 self)
Abstract: Local Model Networks are flexible architectures for the representation of complex nonlinear dynamic systems. The local nature of the representation leads to a modular network which can integrate a variety of paradigms (neural nets, statistics, fuzzy systems, and a priori mathematical models), but, because of the power of the local models, the architecture is less sensitive to the curse of dimensionality than other local representations, such as Radial Basis Function networks. The concept of 'locality' is a difficult one to define, and tends to vary over a problem's input space, so a constructive structure identification algorithm is presented which automatically defines a suitable model structure on the basis of the observed data from the process being identified. Local learning algorithms are introduced for the local model parameter optimisation, which save computational effort and produce more interpretable and robust models.

1. Introduction: Computationally intensive learning systems...
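A minimal sketch of the blending at the heart of a local model network (illustrative only; in the paper both the validity functions and the local model structures are identified from data): normalized Gaussian validity functions interpolate between local affine models, so each unit contributes a full local model rather than a single basis value:

```python
import numpy as np

def local_model_network(x, centers, width, local_params):
    """One-input local model network: normalized Gaussian validity functions
    rho_i blend local affine models y_i = a_i * x + b_i."""
    rho = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
    rho = rho / rho.sum()                        # normalized validity functions
    y_local = np.array([a * x + b for a, b in local_params])
    return float(np.dot(rho, y_local))

# Halfway between the two centers both local models get equal validity,
# so the output is the average of the two local outputs.
y_mid = local_model_network(0.0, np.array([-1.0, 1.0]), 1.0,
                            [(2.0, 1.0), (0.0, -1.0)])
```

Because each unit is a full affine model rather than a constant, far fewer units are needed than in an RBF network of comparable accuracy, which is the dimensionality argument made in the abstract.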
Nonmonotonic Activation Functions in Multilayer Perceptrons
1993
Cited by 8 (1 self)
Abstract: Multilayer perceptrons (MLPs) and radial basis function networks (RBFNs) are the two most common types of feedforward neural networks used for pattern classification and continuous function approximation. MLPs are characterized by slow learning speed, low memory retention, and small node requirements, while RBFNs are known to have high learning speed and high memory retention, but large node requirements. This dissertation asks and answers the question: "Can we do better?" Two types of neural network architectures are introduced: the hyperridge and the hyperhill. A hyperridge network is a perceptron with no hidden layers and an activation function of the form g(h) = sgn(c^2 - h^2), where h is the net input and c is a constant "width", while a hyp...
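The hyperridge activation is simple enough to sketch directly. The unit below fires +1 inside the slab |h| < c and -1 outside; the XOR example is an illustrative consequence of the nonmonotonicity (a monotonic single-layer perceptron cannot separate XOR), not an example taken from the dissertation:

```python
import numpy as np

def hyperridge(x, w, b, c):
    """Hyperridge unit: g(h) = sgn(c^2 - h^2) with net input h = w.x + b.
    Returns +1 inside the slab |h| < c, -1 outside (boundary treated as -1)."""
    h = np.dot(w, x) + b
    return 1 if c ** 2 - h ** 2 > 0 else -1

# A single hyperridge unit, with no hidden layer, separates XOR:
# h = x1 + x2 - 1 is 0 exactly on the two "true" XOR inputs.
w, b, c = np.array([1.0, 1.0]), -1.0, 0.5
for x, target in [((0, 0), -1), ((1, 1), -1), ((0, 1), 1), ((1, 0), 1)]:
    assert hyperridge(np.array(x, dtype=float), w, b, c) == target
```

The slab decision region is exactly the "small node requirement" trade-off the abstract alludes to: one nonmonotonic unit can carve a region that would need two monotonic units.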