Results 1-8 of 8
ANFIS: Adaptive-Network-Based Fuzzy Inference System
, 1993
Abstract

Cited by 432 (5 self)
This paper presents the architecture and learning procedure underlying ANFIS (Adaptive-Network-based Fuzzy Inference System), a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In our simulation, we employ the ANFIS architecture to model nonlinear functions, identify nonlinear components online in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested. 1 Introduction System modeling based on conventional mathematical tools (e.g., differential equations) is not well suited for dealing with ill-define...
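The ANFIS structure the abstract describes — membership functions, rule firing strengths, and a weighted average of linear rule consequents — can be sketched as a minimal forward pass. The one-input, two-rule Sugeno setup and all parameter values below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function (layer 1 of ANFIS)."""
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def anfis_forward(x, premise, consequent):
    """Forward pass of a one-input, two-rule first-order Sugeno ANFIS.

    premise:    [(c1, s1), (c2, s2)]  Gaussian MF parameters
    consequent: [(p1, r1), (p2, r2)]  linear rule outputs f_i = p_i*x + r_i
    """
    # Layers 1-2: rule firing strengths w_i
    w = np.array([gauss_mf(x, c, s) for c, s in premise])
    # Layer 3: normalized firing strengths
    w_bar = w / w.sum()
    # Layers 4-5: weighted sum of linear consequents
    f = np.array([p * x + r for p, r in consequent])
    return float(np.dot(w_bar, f))

y = anfis_forward(0.0, premise=[(-1.0, 1.0), (1.0, 1.0)],
                  consequent=[(0.5, 1.0), (0.5, -1.0)])
```

In the hybrid learning procedure the paper proposes, the premise parameters would be tuned by gradient descent and the linear consequent parameters by least squares; only the forward computation is shown here.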
Neuro-fuzzy modeling and control
 Proceedings of the IEEE
, 1995
Abstract

Cited by 147 (1 self)
Abstract — Fundamental and advanced developments in neuro-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuzzy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. The fuzzy models under the framework of adaptive networks are called ANFIS (Adaptive-Network-based Fuzzy Inference System), which possesses certain advantages over neural networks. We introduce the design methods for ANFIS in both modeling and control applications. Current problems and future directions for neuro-fuzzy approaches are also addressed. Keywords — Fuzzy logic, neural networks, fuzzy modeling, neuro-fuzzy modeling, neuro-fuzzy control, ANFIS.
An algorithm for fast convergence in training neural networks
 Proceedings of the International Joint Conference on Neural Networks, 2:1778–1782
, 2001
Abstract

Cited by 1 (0 self)
In this work, two modifications of the Levenberg-Marquardt algorithm for feedforward neural networks are studied. One modification is made to the performance index, while the other is to the calculation of gradient information. The modified algorithm gives a better convergence rate than the standard Levenberg-Marquardt (LM) method, is less computationally intensive, and requires less memory. The performance of the algorithm has been checked on several example problems.
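For context, the standard LM update that such modifications build on solves a damped normal-equation system at each step. The toy one-parameter fitting problem below is a hypothetical illustration, not one of the paper's benchmarks:

```python
import numpy as np

def lm_step(params, residual_fn, jac_fn, mu):
    """One Levenberg-Marquardt update: dp = -(J^T J + mu*I)^{-1} J^T r."""
    r = residual_fn(params)
    J = jac_fn(params)
    A = J.T @ J + mu * np.eye(len(params))
    return params - np.linalg.solve(A, J.T @ r)

# Toy problem (an assumption for illustration): fit y = a*x to data with a = 2.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
residual = lambda p: p[0] * x - y          # residual vector r(p)
jacobian = lambda p: x.reshape(-1, 1)      # Jacobian dr/dp

p = np.array([0.0])
for _ in range(20):
    p = lm_step(p, residual, jacobian, mu=1e-2)
```

Small `mu` makes the step behave like Gauss-Newton; large `mu` makes it behave like damped gradient descent — the trade-off the paper's modifications aim to improve.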
Modified Levenberg-Marquardt Method for Neural Networks Training
Abstract

Cited by 1 (0 self)
Abstract — In this paper a modification of the Levenberg-Marquardt algorithm for MLP neural network learning is proposed. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. An example is given to show the usefulness of this method, and a simulation verifies its results. Keywords — Levenberg-Marquardt, modification, neural network, variable learning rate.
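One common way to realize a variable learning rate in LM — and to damp oscillation — is to adapt the damping factor depending on whether a step reduced the cost. The schedule and the toy fitting problem below are assumptions for illustration, not the authors' exact scheme:

```python
import numpy as np

def lm_train(p, residual_fn, jac_fn, mu=1e-2, iters=50):
    """LM with a variable damping factor mu: shrink mu after a successful
    step, grow it (and reject the step) otherwise, which damps oscillation."""
    cost = 0.5 * np.sum(residual_fn(p) ** 2)
    for _ in range(iters):
        r, J = residual_fn(p), jac_fn(p)
        dp = np.linalg.solve(J.T @ J + mu * np.eye(len(p)), -J.T @ r)
        p_new = p + dp
        cost_new = 0.5 * np.sum(residual_fn(p_new) ** 2)
        if cost_new < cost:            # accept: behave more like Gauss-Newton
            p, cost, mu = p_new, cost_new, mu / 10.0
        else:                          # reject: behave more like gradient descent
            mu *= 10.0
    return p

# Toy problem (assumed): fit y = a*x to data generated with a = 2.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
p_fit = lm_train(np.array([0.0]),
                 lambda p: p[0] * x - y,
                 lambda p: x.reshape(-1, 1))
```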
Alternative discrete-time operators and their application to nonlinear models
, 1997
Abstract
The shift operator, defined as q x(t) = x(t+1), is the basis for almost all discrete-time models. It has been shown, however, that linear models based on the shift operator suffer problems when used to model lightly-damped low-frequency (LDLF) systems, with poles near (1, 0) on the unit circle in the complex plane. This problem occurs under fast sampling conditions. As the sampling rate increases, coefficient sensitivity and round-off noise become a problem because the difference between successive sampled inputs becomes smaller and smaller. The resulting coefficients of the model approach the coefficients obtained in a binomial expansion, regardless of the underlying continuous-time system. This implies that for a given finite wordlength, severe inaccuracies may result. Wordlengths for the coefficients may also need to be made longer to accommodate models which have low-frequency characteristics, corresponding to poles in the neighbourhood of (1, 0). These problems also arise in neural network models which comprise linear parts and nonlinear neural activation functions.

Various alternative discrete-time operators can be introduced which offer numerical and computational advantages over the conventional shift operator. These operators have been proposed independently of each other in the fields of digital filtering, adaptive control and neural networks, and include the delta, rho, gamma and bilinear operators. In this paper we first review these operators and examine some of their properties. An analysis of the TDNN and FIR MLP network structures is given which shows their susceptibility to parameter sensitivity problems. Subsequently, it is shown that models may be formulated using alternative discrete-time operators which have low-sensitivity properties. Consideration is given to the problem of finding parameters for stable alternative discrete-time operators. A learning algorithm which adapts the alternative discrete-time operator parameters online is presented for MLP neural network models based on alternative discrete-time operators. It is shown that neural network models which use these alternative discrete-time operators perform better than those using the shift operator alone.
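The coefficient-clustering effect under fast sampling — and how the delta operator δx(t) = (x(t+1) - x(t))/Δ avoids it — can be seen in a toy first-order system. The system dx/dt = -a·x and its Euler discretization are illustrative assumptions, not an example from the paper:

```python
import numpy as np

a = 2.0  # assumed continuous-time pole of dx/dt = -a*x

def shift_coeff(delta):
    """Shift-operator model x(t+1) = c*x(t) at sampling period delta.
    As delta -> 0 the coefficient crowds toward 1, so finite-wordlength
    rounding erases the system's identity."""
    return 1.0 - a * delta

def delta_coeff(delta):
    """Delta-operator model d x(t) = c*x(t), with
    d x(t) = (x(t+1) - x(t)) / delta; the coefficient stays at -a
    regardless of the sampling rate."""
    return (shift_coeff(delta) - 1.0) / delta

for d in (1e-1, 1e-3, 1e-6):
    print(f"delta={d:g}  shift coeff={shift_coeff(d):.6f}  "
          f"delta-op coeff={delta_coeff(d):.6f}")
```

The shift-operator coefficient approaches 1.0 as the sampling period shrinks, while the delta-operator coefficient remains -2.0 — the low-sensitivity property the paper exploits.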
Proceedings of the IV Brazilian Conference on Neural Networks - IV Congresso Brasileiro de Redes Neurais, pp. 247-251, July 20-22, 1999, ITA, São José dos Campos, SP, Brazil
Abstract
An adaptive neural network training Kalman filtering algorithm is implemented. The XOR problem and a benchmark problem for diagnosis of breast cancer are used for testing and analysis of the algorithm's behavior. Results show that the algorithm performs well on both problems, having the desirable characteristics of being simple to implement, offering parallel processing features, and exhibiting good numerical behavior due to the adaptive distribution of learning.
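The core idea of Kalman-filter training — treat the network weights as the filter's state and each training target as a measurement — can be sketched for a linear neuron. This is a deliberately simplified stand-in; the paper's adaptive, nonlinear variant (and its XOR/breast-cancer experiments) is not reproduced here:

```python
import numpy as np

def kalman_train(X, y, q=1e-6, r=1e-2):
    """Train a linear neuron y = w.x by Kalman filtering: weights are the
    state, each target a measurement. q/r are assumed process/measurement
    noise variances (tuning knobs, not values from the paper)."""
    n = X.shape[1]
    w = np.zeros(n)                  # state estimate (weights)
    P = np.eye(n)                    # state covariance
    for x_i, t in zip(X, y):
        H = x_i.reshape(1, -1)       # measurement Jacobian (linear model)
        S = float(H @ P @ H.T) + r   # innovation covariance
        K = P @ H.T / S              # Kalman gain
        w = w + (K * (t - x_i @ w)).ravel()
        P = P - K @ H @ P + q * np.eye(n)
    return w

# Toy linearly-separable data (assumed): targets from true weights [1, -1].
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = X @ np.array([1.0, -1.0])
w_hat = kalman_train(np.tile(X, (50, 1)), np.tile(y, 50))
```

For a nonlinear MLP the measurement Jacobian H would be the derivative of the network output with respect to the weights (an extended Kalman filter), which is where the paper's adaptive distribution of learning comes in.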
Neural Network based Complex Image Compression using Modified Levenberg-Marquardt Method for Learning
Abstract
The emergence of artificial neural networks in image processing has led to improvements in image compression. In this paper an adaptive method for image compression based on the complexity level of the image, together with a modification of the Levenberg-Marquardt algorithm for MLP neural network learning, is presented. In the adaptive method, different backpropagation artificial neural networks are used as compressor and decompressor; this is achieved by dividing the image into blocks, computing the complexity of each block, and then selecting one network for each block according to its complexity value. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. The results demonstrate the superiority of this method compared with the existing one. This paper is organized as follows: Section II discusses the multilayer perceptron neural network and its adaptive approach, developed directly for image compression. Section III describes the complexity measurement methods used in this paper. Section IV analyzes the modified LM method for learning and convergence. Section V presents the experimental results and Section VI concludes.
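The block-wise selection step — split the image into blocks, score each block's complexity, and route it to a network — can be sketched as follows. Pixel variance and the three-way threshold are assumed stand-ins; the paper defines its own complexity measures in Section III:

```python
import numpy as np

def block_complexity(image, bs=8):
    """Split an image into bs x bs blocks and score each block's complexity
    by its pixel variance (one plausible measure; the paper's exact metric
    is assumed). Returns {(row, col): score}."""
    h, w = image.shape
    scores = {}
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            scores[(i, j)] = float(image[i:i + bs, j:j + bs].var())
    return scores

def select_network(score, thresholds=(10.0, 100.0)):
    """Map a complexity score to one of three hypothetical compressor
    networks; more detail -> a network with a larger hidden layer."""
    if score < thresholds[0]:
        return "low"
    if score < thresholds[1]:
        return "medium"
    return "high"

flat = np.zeros((8, 8))                                   # uniform block
busy = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0       # checkerboard block
```

A flat block has zero variance and routes to the cheapest network, while a high-contrast block routes to the strongest one.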