Results 1–10 of 43
A new methodology of extraction, optimization and application of crisp and fuzzy logical rules
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 2001
Abstract

Cited by 56 (24 self)
A new methodology of extraction, optimization, and application of sets of logical rules is described. Neural networks are used for initial rule extraction, local or global minimization procedures for optimization, and Gaussian uncertainties of measurements are assumed during application of logical rules. Algorithms for extraction of logical rules from data with real-valued features require determination of linguistic variables or membership functions. Context-dependent membership functions for crisp and fuzzy linguistic variables are introduced and methods of their determination are described. Several neural and machine learning methods of logical rule extraction generating initial rules are described, based on constrained multilayer perceptrons, networks with localized transfer functions, or separability criteria for determination of linguistic variables. A trade-off between accuracy and simplicity is explored at the rule-extraction stage, and a trade-off between rejection and error level at the optimization stage. Gaussian uncertainties of measurements are assumed during application of crisp logical rules, leading to "soft trapezoidal" membership functions and allowing the linguistic variables to be optimized with gradient procedures. Numerous applications of this methodology to benchmark and real-life problems are reported, and very simple crisp logical rules are provided for many datasets.
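The "soft trapezoidal" membership functions in the abstract above arise when a crisp interval condition is evaluated under Gaussian measurement uncertainty: the membership becomes the probability that the true value lies in the interval. A minimal sketch (the function name and parameterization are illustrative, not taken from the paper):

```python
import math

def soft_trapezoid(x, a, b, s):
    """Membership of x in the crisp interval [a, b] when the
    measurement x carries Gaussian noise with standard deviation s:
    the probability that the true value falls inside [a, b]."""
    z = s * math.sqrt(2.0)
    return 0.5 * (math.erf((b - x) / z) - math.erf((a - x) / z))
```

As s shrinks toward zero the function approaches the crisp rectangular indicator of [a, b]; for larger s the edges soften, which is what makes gradient-based optimization of the interval boundaries possible.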
Improving the Rprop Learning Algorithm
 PROCEEDINGS OF THE SECOND INTERNATIONAL SYMPOSIUM ON NEURAL COMPUTATION (NC 2000)
, 2000
Abstract

Cited by 54 (7 self)
The Rprop algorithm proposed by Riedmiller and Braun is one of the best-performing first-order learning methods for neural networks. We introduce modifications of the algorithm that improve its learning speed. The resulting speedup is shown experimentally for a set of neural network learning tasks as well as for artificial error surfaces.
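The per-weight step adaptation at the heart of Rprop can be sketched as follows; this is a generic sketch of the scheme (with the customary defaults η+ = 1.2, η− = 0.5), not the authors' improved variant, and the sign-flip handling shown is the iRprop-style choice of zeroing the stored gradient:

```python
def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop update for a single weight: adapt the step size from
    the sign agreement of successive gradients, then move against the
    gradient's sign (the gradient magnitude itself is ignored).
    Returns (weight change, new step size, gradient to store)."""
    if grad * prev_grad > 0:          # same direction: accelerate
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:        # sign flip: we overshot, slow down
        step = max(step * eta_minus, step_min)
        grad = 0.0                    # skip re-adaptation on the next step
    delta_w = -step if grad > 0 else (step if grad < 0 else 0.0)
    return delta_w, step, grad
```

Because only gradient signs are used, the method is insensitive to the gradient scaling problems that plague plain backpropagation.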
Meta-Learning Evolutionary Artificial Neural Networks
 Journal, Elsevier Science, Netherlands
, 2003
Abstract

Cited by 45 (10 self)
In this paper, we present MLEANN (Meta-Learning Evolutionary Artificial Neural Network), an automatic computational framework for the adaptive optimization of artificial neural networks, wherein the neural network architecture, activation function, connection weights, learning algorithm, and its parameters are adapted according to the problem. We explored the performance of MLEANN and conventionally designed artificial neural networks on function approximation problems. To evaluate comparative performance, we used three different well-known chaotic time series. We also present the state-of-the-art popular neural network learning algorithms and some experimental results on convergence speed and generalization performance. We explored the performance of the backpropagation, conjugate gradient, quasi-Newton, and Levenberg-Marquardt algorithms on the three chaotic time series. Performance of the different learning algorithms was evaluated as the activation functions and architecture were changed. We further present the theoretical background, algorithm, and design strategy, and demonstrate how effective the proposed MLEANN framework is for designing a neural network that is smaller, faster, and generalizes better.
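The outer loop of a framework like MLEANN is an evolutionary search over network configurations. A minimal (and deliberately simplified) sketch of such a survivor-selection loop; the genome layout and the fitness function here are hypothetical stand-ins, not the authors' design:

```python
import random

def evolve(fitness, population, mutate, generations=30, seed=0):
    """Minimal elitist loop standing in for MLEANN's outer evolutionary
    search: keep the best half, refill by mutating survivors.
    Lower fitness is better (think: validation error)."""
    rng = random.Random(seed)
    pop = list(population)
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: max(1, len(pop) // 2)]
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(len(pop) - len(survivors))]
    return min(pop, key=fitness)

# toy genome: (hidden_units, activation); the fitness is a hypothetical
# stand-in for the validation error of the trained network
ACTS = ("tanh", "logistic", "relu")

def toy_fitness(g):
    units, act = g
    return abs(units - 12) + (0.0 if act == "tanh" else 1.0)

def toy_mutate(g, rng):
    units, act = g
    return (max(1, units + rng.randint(-2, 2)),
            rng.choice(ACTS) if rng.random() < 0.3 else act)
```

Because the best individual always survives, the best fitness in the population is non-increasing over generations.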
Computational Intelligence Methods for Rule-Based Data Understanding
 PROCEEDINGS OF THE IEEE
, 2004
Abstract

Cited by 31 (3 self)
... This paper is focused on the extraction and use of logical rules for data understanding. All aspects of rule generation, optimization, and application are described, including the problem of finding good symbolic descriptors for continuous data, trade-offs between accuracy and simplicity at the rule-extraction stage, and trade-offs between rejection and error level at the rule-optimization stage. Stability of rule-based description, calculation of probabilities from rules, and other related issues are also discussed. Major approaches to extraction of logical rules based on neural networks, decision trees, machine learning, and statistical methods are introduced. Optimization and application issues for sets of logical rules are described. Applications of such methods to benchmark and real-life problems are reported and illustrated with simple logical rules for many datasets. Challenges and new directions for research are outlined.
On Langevin Updating in Multilayer Perceptrons
 Neural Computation
, 1993
Abstract

Cited by 22 (1 self)
The Langevin updating rule, in which noise is added to the weights during learning, is presented and analyzed. It is well controlled and, being a natural extension of standard backpropagation learning, is easily combined with other modifications of backpropagation. If the Hessian matrix is numerically ill-conditioned, Langevin updating converges faster than backpropagation and, probably, faster than higher-order algorithms as well. This is particularly important for multilayer perceptrons with many hidden layers, which tend to have ill-conditioned Hessians. In addition, Manhattan updating is shown to have a similar effect to Langevin updating. Introduction: Performance of artificial neural networks (ANN) is often improved when external noise is present during the training phase. For instance, in Hopfield-type networks the basins of attraction for the stored memory patterns are enlarged when noise-corrupted training patterns are used [1]. In linear perceptrons the generalization a...
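The two update rules compared in the abstract above can be sketched in a few lines; this is a generic illustration of the ideas (per-weight Gaussian noise vs. fixed-size sign steps), with hyperparameter values chosen arbitrarily, not the paper's experimental setup:

```python
import random

def langevin_update(w, grad, lr=0.05, noise_std=0.01, rng=random):
    """Langevin updating: an ordinary gradient step plus i.i.d.
    Gaussian noise added to each weight change."""
    return [wi - lr * gi + rng.gauss(0.0, noise_std)
            for wi, gi in zip(w, grad)]

def manhattan_update(w, grad, lr=0.05):
    """Manhattan updating: step by a fixed amount against the sign of
    each gradient component, ignoring its magnitude."""
    sign = lambda g: (g > 0) - (g < 0)
    return [wi - lr * sign(gi) for wi, gi in zip(w, grad)]
```

Both rules decouple the effective step size from the gradient magnitude along ill-conditioned directions, which is the mechanism the abstract credits for the similar behavior of the two methods.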
Speeding Up Backpropagation Algorithms by Using Cross-Entropy Combined with Pattern Normalization
, 1997
Abstract

Cited by 19 (2 self)
This paper demonstrates how the backpropagation algorithm (BP) and its variants can ...
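The abstract is truncated, so the sketch below illustrates only the two ingredients named in the title, under my own assumptions about their standard forms: with a cross-entropy loss and logistic outputs the output-layer error signal reduces to (y − t), avoiding the vanishing sigmoid-derivative factor of squared-error BP, and pattern normalization rescales each input vector:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def cross_entropy_delta(a, t):
    """Output-layer error signal for cross-entropy loss with a logistic
    output unit: simply (y - t); the sigmoid-prime factor that stalls
    squared-error BP on flat plateaus cancels out."""
    return sigmoid(a) - t

def normalize_pattern(x):
    """Scale one input pattern to zero mean and unit variance
    (a constant pattern is left at zero)."""
    n = len(x)
    mu = sum(x) / n
    var = sum((xi - mu) ** 2 for xi in x) / n
    sd = math.sqrt(var) or 1.0
    return [(xi - mu) / sd for xi in x]
```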
Transfer Functions: Hidden Possibilities for Better Neural Networks.
 9th European Symposium on Artificial Neural Networks (ESANN), Brugge 2001. Defacto publications
, 2001
Abstract

Cited by 19 (7 self)
Sigmoidal or radial transfer functions guarantee neither the best generalization nor fast learning of neural networks. Families of parameterized transfer functions provide flexible decision borders. Networks based on such transfer functions should be small and accurate. Several possibilities for using transfer functions of different types in neural models are discussed, including enhancement of input features, selection of functions from a fixed pool, optimization of parameters of a general type of function, regularization of large networks with heterogeneous nodes, and constructive approaches. A new taxonomy of transfer functions is proposed, allowing derivation of known and new functions by additive or multiplicative combination of activation and output functions.
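The taxonomy's key idea is that a transfer function factors into an activation (how inputs are combined with parameters) and an output function (how the result is squashed). A minimal sketch with illustrative names (the specific combinations below are just the two classical corners of the taxonomy; mixing the axes yields the hybrids the abstract alludes to):

```python
import math

# activation: how a node combines inputs with parameters
def inner_product(x, w):       # "projection" activation (MLP-style)
    return sum(xi * wi for xi, wi in zip(x, w))

def sq_distance(x, c):         # "distance" activation (RBF-style)
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

# output: how the activation value is squashed
def logistic(a):
    return 1.0 / (1.0 + math.exp(-a))

def gaussian(a, width=1.0):
    return math.exp(-a / (2.0 * width ** 2))

def node(x, params, activation, output):
    """A transfer function = output(activation(...)): e.g.
    inner_product + logistic gives a sigmoidal MLP node, while
    sq_distance + gaussian gives a radial basis node."""
    return output(activation(x, params))
```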
Fuzzy and Crisp Logical Rule Extraction Methods in Application to Medical Data.
, 1999
Abstract

Cited by 16 (11 self)
A comprehensive methodology of extraction of optimal sets of logical rules using neural networks and global minimization procedures has been developed. Initial rules are extracted using density estimation neural networks with rectangular functions or multi-layered perceptron (MLP) networks trained with a constrained backpropagation algorithm, transforming MLPs into simpler networks performing logical functions. A constructive algorithm called C-MLP2LN is proposed, in which rules of increasing specificity are generated consecutively by adding more nodes to the network. Neural rule extraction is followed by optimization of rules using global minimization techniques. Estimation of confidence of various sets of rules is discussed. The hybrid approach to rule extraction has been applied to a number of benchmark and real-life problems with very good results. In many cases crisp logical rules are quite satisfactory, but sometimes fuzzy rules may be significantly more accurate. Keywords...
Hybrid Neural-Global Minimization Method of Logical Rule Extraction
, 1998
Abstract

Cited by 15 (9 self)
A methodology of extraction of optimal sets of logical rules using neural networks and global minimization procedures has been developed. Initial rules are extracted using density estimation neural networks with rectangular functions or multi-layered perceptron (MLP) networks trained with a constrained backpropagation algorithm, transforming MLPs into simpler networks performing logical functions. A constructive algorithm called C-MLP2LN is proposed, in which rules of increasing specificity are generated consecutively by adding more nodes to the network. Neural rule extraction is followed by optimization of rules using global minimization techniques. Estimation of confidence of various sets of rules is discussed. The hybrid approach to rule extraction has been applied to a number of benchmark and real-life problems with very good results. Keywords: computational intelligence, neural networks, extraction of logical rules, data mining. 1. Logical rules: introduction. Why should one use log...
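The constrained backpropagation that turns an MLP into a logical network works by adding a regularizer that drives every weight toward the set {−1, 0, +1}. A sketch in the spirit of that scheme; the exact functional form and the λ values here are my assumption, not necessarily the authors' formula:

```python
def mlp2ln_penalty(weights, lam0=1e-3, lam1=1e-4):
    """Regularizer in the spirit of constrained backpropagation for
    rule extraction: the first term pushes weights toward 0 (pruning),
    the second vanishes only at w in {-1, 0, +1}, so surviving weights
    settle on logical (binary) values."""
    p0 = sum(w * w for w in weights)
    p1 = sum(w * w * (w - 1.0) ** 2 * (w + 1.0) ** 2 for w in weights)
    return lam0 * p0 + lam1 * p1
```

Added to the ordinary training loss, this penalty is zero-gradient exactly when all weights are 0 or ±1, i.e. when the network computes a crisp logical function of its inputs.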