Results 1–10 of 27
A new class of wavelet networks for nonlinear system identification
 IEEE Trans. Neural Netw., 2005
Cited by 23 (5 self)
Abstract—A new class of wavelet networks (WNs) is proposed for nonlinear system identification. In the new networks, the model structure for a high-dimensional system is chosen to be a superimposition of a number of functions with fewer variables. By expanding each function using truncated wavelet decompositions, the multivariate nonlinear networks can be converted into linear-in-the-parameter regressions, which can be solved using least-squares type methods. An efficient model term selection approach based upon a forward orthogonal least squares (OLS) algorithm and the error reduction ratio (ERR) is applied to solve the linear-in-the-parameters problem in the present study. The main advantage of the new WN is that it exploits the attractive features of multiscale wavelet decompositions and the capability of traditional neural networks. By adopting the analysis of variance (ANOVA) expansion, WNs can now handle nonlinear identification problems in high dimensions. Index Terms—Nonlinear autoregressive with exogenous inputs (NARX) models, nonlinear system identification, orthogonal least squares (OLS), wavelet networks (WNs).
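The forward OLS term selection with the error reduction ratio (ERR) can be sketched for a generic linear-in-the-parameters regression. This is an illustrative implementation with assumed names, not the authors' code:

```python
import numpy as np

def ols_err_selection(P, y, n_terms):
    """Forward orthogonal least squares term selection.

    Greedily picks columns of the regressor matrix P that maximize the
    error reduction ratio (ERR) with respect to the target y, using
    classical Gram-Schmidt orthogonalization against the already
    selected columns.  Returns the selected indices and their ERRs.
    """
    P = P.astype(float)
    y = y.astype(float)
    n, m = P.shape
    selected, errs = [], []
    Q = []                     # orthogonalized selected columns
    sigma = y @ y              # total output energy
    for _ in range(n_terms):
        best_idx, best_err, best_q = -1, -1.0, None
        for j in range(m):
            if j in selected:
                continue
            q = P[:, j].copy()
            for qk in Q:       # orthogonalize against selected terms
                q -= (qk @ P[:, j]) / (qk @ qk) * qk
            denom = q @ q
            if denom < 1e-12:  # numerically dependent on selected set
                continue
            err = (q @ y) ** 2 / (denom * sigma)  # error reduction ratio
            if err > best_err:
                best_idx, best_err, best_q = j, err, q
        if best_idx < 0:
            break
        selected.append(best_idx)
        errs.append(best_err)
        Q.append(best_q)
    return selected, errs
```

A term that explains the output perfectly has ERR close to 1, so the greedy loop picks it first; the remaining terms only account for residual energy.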
Evolving neural networks for strategic decision-making problems
 Neural Networks, Special Issue on Goal-Directed Neural Systems, 2009
Cited by 18 (5 self)
Evolution of neural networks, or neuroevolution, has been a successful approach to many low-level control problems such as pole balancing, vehicle control, and collision warning. However, certain types of problems – such as those involving strategic decision-making – have remained difficult for neuroevolution to solve. This paper evaluates the hypothesis that such problems are difficult because they are fractured: the correct action varies discontinuously as the agent moves from state to state. A method for measuring fracture using the concept of function variation is proposed, and based on this concept, two methods for dealing with fracture are examined: neurons with local receptive fields, and refinement based on a cascaded network architecture. Experiments in several benchmark domains are performed to evaluate how different levels of fracture affect the performance of neuroevolution methods, demonstrating that these two modifications improve performance significantly. These results form a promising starting point for expanding neuroevolution to strategic tasks.
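One way to make the variation-based fracture idea concrete is to count how often the correct action changes along a path through state space. This is a hypothetical illustration, not the paper's exact definition of function variation:

```python
import numpy as np

def fracture_along_path(optimal_action, states):
    """Estimate fracture as the discrete variation of the optimal
    action along a path through state space: the number of consecutive
    state pairs where the correct action changes.  High values suggest
    a fractured decision surface.  (Hypothetical sketch, not the
    authors' formula.)
    """
    actions = [optimal_action(s) for s in states]
    return sum(a != b for a, b in zip(actions, actions[1:]))
```

A smooth domain, where one action stays correct over large regions, scores near zero; a fractured domain flips repeatedly along the same path.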
Evolutionary optimization of radial basis function classifiers for data mining applications
 IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2005
Cited by 16 (0 self)
Abstract—In many data mining applications that address classification problems, feature and model selection are considered as key tasks. That is, appropriate input features of the classifier must be selected from a given (and often large) set of possible features and structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes an evolutionary algorithm (EA) that performs feature and model selection simultaneously for radial basis function (RBF) classifiers. In order to reduce the optimization effort, various techniques are integrated that accelerate and improve the EA significantly: hybrid training of RBF networks, lazy evaluation, consideration of soft constraints by means of penalty terms, and temperature-based adaptive control of the EA. The feasibility and the benefits of the approach are demonstrated by means of four data mining problems: intrusion detection in computer networks, biometric signature verification, customer acquisition with direct marketing methods, and optimization of chemical production processes. It is shown that, compared to earlier EA-based RBF optimization techniques, the runtime is reduced by up to 99% while error rates are lowered by up to 86%, depending on the application. The algorithm is independent of specific applications so that many ideas and solutions can be transferred to other classifier paradigms. Index Terms—Data mining, evolutionary algorithm (EA), feature selection, model selection, radial basis function (RBF) network.
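The soft-constraint penalty terms and the mutation side of simultaneous feature/model selection can be illustrated with a hypothetical fitness function and a bit-flip operator on a binary feature mask. The names and the weights alpha and beta are assumptions for illustration, not values from the paper:

```python
import numpy as np

def penalized_fitness(error_rate, n_features, n_centers,
                      alpha=0.01, beta=0.005):
    """Soft-constraint fitness for joint feature and model selection:
    classification error plus penalty terms that discourage large
    feature subsets and large RBF networks.  alpha and beta are
    illustrative penalty weights."""
    return error_rate + alpha * n_features + beta * n_centers

def mutate_mask(mask, rng, p=0.1):
    """Bit-flip mutation on a binary feature mask; returns a new mask
    and leaves the parent unchanged."""
    mask = mask.copy()
    flips = rng.random(mask.size) < p
    mask[flips] = ~mask[flips]
    return mask
```

With equal error rates, the penalized fitness prefers the individual that uses fewer features and a smaller network, which is how the soft constraints steer the EA without hard-coding a size limit.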
Parallel Multiobjective Memetic RBFNNs Design and Feature Selection for Function Approximation Problems
Cited by 10 (5 self)
Abstract. The design of Radial Basis Function Neural Networks (RBFNNs) still remains a difficult task when they are applied to classification or to regression problems. The difficulty arises when the parameters that define an RBFNN have to be set: the number of RBFs, the position of their centers, and the length of their radii. Another issue that has to be faced when applying these models to real-world applications is selecting the variables that the RBFNN will use as inputs. The literature presents several methodologies to perform these two tasks separately; however, due to the intrinsic parallelism of genetic algorithms, a parallel implementation allows the algorithm proposed in this paper to evolve solutions for both problems at the same time. The parallelization of the algorithm consists not only in the evolution of the two problems but also in the specialization of the crossover and mutation operators in order to evolve the different elements to be optimized when designing RBFNNs. The underlying genetic algorithm is the Non-dominated Sorting Genetic Algorithm II (NSGA-II), which helps to keep a balance between the size of the network and its approximation accuracy in order to avoid overtrained networks. Another novelty of the proposed algorithm is the incorporation of local search algorithms in three stages of the algorithm: initialization of the population, evolution of the individuals, and final optimization of the Pareto front. The initialization of the individuals is performed by hybridizing clustering techniques with Mutual Information (MI) theory to select the input variables. As the experiments will show, the synergy of the different paradigms and techniques combined by the presented algorithm makes it possible to obtain very accurate models using the most significant input variables.
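NSGA-II's core ranking step, fast non-dominated sorting, can be sketched for two objectives such as (approximation error, network size). A generic minimization version, not code from the paper:

```python
def nondominated_fronts(points):
    """Fast non-dominated sorting as in NSGA-II, for minimization.

    points is a list of objective tuples, e.g. (error, network_size);
    returns a list of fronts, each a list of indices into points.
    """
    n = len(points)

    def dominates(a, b):
        # a dominates b: no worse in every objective, better in at least one
        return all(x <= y for x, y in zip(a, b)) and a != b

    dominating = [[] for _ in range(n)]  # indices each point dominates
    counts = [0] * n                     # how many points dominate i
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominating[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
    fronts = [[i for i in range(n) if counts[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominating[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    fronts.pop()                         # drop trailing empty front
    return fronts
```

Front 0 is the current Pareto-optimal set (small accurate networks trade off against larger, more accurate ones); later fronts rank the dominated remainder.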
Pareto Evolutionary Neural Networks
 IEEE Transactions on Neural Networks, 2003
Cited by 10 (1 self)
For the purposes of forecasting (or classification) tasks, neural networks (NNs) are typically trained with respect to Euclidean distance minimisation. This is commonly the case irrespective of any other end-user preferences. In a number of situations, most notably time series forecasting, users may have other objectives in addition to Euclidean distance minimisation. Recent studies in the NN domain have confronted this problem by propagating a linear sum of errors. However, this approach implicitly assumes a priori knowledge of the error surface defined by the problem, which, typically, is not the case.
Particle swarm optimization aided orthogonal forward regression for unified data modelling
 IEEE Trans. Evol. Comput., 2010
Cited by 8 (4 self)
We propose a unified data modeling approach that is equally applicable to supervised regression and classification applications, as well as to unsupervised probability density function estimation. A particle swarm optimization (PSO) aided orthogonal forward regression (OFR) algorithm based on leave-one-out (LOO) criteria is developed to construct parsimonious radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines the center vector and diagonal covariance matrix of one RBF node by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean square error, while the LOO misclassification rate is adopted in two-class classification applications. By adopting the Parzen window estimate as the desired response, the unsupervised density estimation problem is transformed into a constrained regression problem. This PSO aided OFR algorithm for tunable-node RBF networks is capable of constructing very parsimonious RBF models that generalize well, and our analysis and experimental results demonstrate that the algorithm is computationally even simpler than the efficient regularization assisted orthogonal least square algorithm based on LOO criteria for selecting fixed-node RBF models. Another significant advantage of the proposed learning procedure is that it does not have learning hyperparameters that have to be tuned using costly cross-validation. The effectiveness of the proposed PSO aided OFR construction procedure is illustrated using several examples taken from regression and classification, as well as density estimation applications.
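The LOO mean square error that serves as the regression selection criterion can be computed in closed form for a linear-in-the-parameters model via the PRESS identity e_i / (1 - h_ii), rather than by n refits. A generic sketch of that identity, not the authors' recursive orthogonal formulation:

```python
import numpy as np

def loo_mse(P, y):
    """Leave-one-out mean square error (PRESS) of the least-squares
    fit y ~ P w, computed from the hat matrix H = P (P'P)^-1 P'
    instead of refitting n times: e_loo_i = e_i / (1 - h_ii).
    Assumes P is n x m with full column rank and n > m."""
    w, *_ = np.linalg.lstsq(P, y, rcond=None)
    residuals = y - P @ w
    h = np.diag(P @ np.linalg.pinv(P.T @ P) @ P.T)  # leverages h_ii
    e_loo = residuals / (1.0 - h)
    return np.mean(e_loo ** 2)
```

Because each candidate node's LOO error comes out of a single fit, a construction loop can rank many candidate centers and covariances per stage at modest cost.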
Construction of Tunable Radial Basis Function Networks Using Orthogonal Forward Selection
Cited by 6 (4 self)
Abstract—An orthogonal forward selection (OFS) algorithm based on leave-one-out (LOO) criteria is proposed for the construction of radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines an RBF node, namely, its center vector and diagonal covariance matrix, by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean-square error, while the LOO misclassification rate is adopted in two-class classification applications. This OFS-LOO algorithm is computationally efficient, and it is capable of constructing parsimonious RBF networks that generalize well. Moreover, the proposed algorithm is fully automatic, and the user does not need to specify a termination criterion for the construction process. The effectiveness of the proposed RBF network construction procedure is demonstrated using examples taken from both regression and classification applications. Index Terms—Classification, leave-one-out (LOO) statistics, orthogonal forward selection (OFS), radial basis function (RBF) network, regression, tunable node.
Improving the Performance of Multiobjective Genetic Algorithm for Function Approximation Through Parallel Islands Specialisation
 Lecture Notes in Artificial Intelligence
Cited by 6 (3 self)
Abstract. Nature shows many examples where the specialisation of elements aimed at solving different problems is successful. There are explorer ants, worker bees, etc., where a group of individuals is assigned a specific task. This paper extrapolates this philosophy, applying it to a multiobjective genetic algorithm. The problem to be solved is the design of Radial Basis Function Neural Networks (RBFNNs) that approximate a function. A non-distributed multiobjective algorithm is compared against a parallel approach that emerges as a straightforward specialisation of the crossover and mutation operators in different islands. The experiments show how, as in the real world, if the different islands evolve specific aspects of the RBFNNs, the results are improved.
Evolving neural networks for fractured domains
 In Proceedings of the Genetic and Evolutionary Computation Conference, 2008
Cited by 5 (2 self)
Evolution of neural networks, or neuroevolution, has been successful on many low-level control problems such as pole balancing, vehicle control, and collision warning. However, high-level strategy problems that require the integration of multiple sub-behaviors have remained difficult for neuroevolution to solve. This paper proposes the hypothesis that such problems are difficult because they are fractured: the correct action varies discontinuously as the agent moves from state to state. This hypothesis is evaluated on several examples of fractured high-level reinforcement learning domains. Standard neuroevolution methods such as NEAT indeed have difficulty solving them. However, a modification of NEAT that uses radial basis function (RBF) nodes to make precise local mutations to network output is able to do much better. These results provide a better understanding of the different types of reinforcement learning problems and the limitations of current neuroevolution methods. Thus, they lay the groundwork for creating the next generation of neuroevolution algorithms that can learn strategic high-level behavior in fractured domains.
A Fuzzy-Possibilistic Fuzzy Ruled Clustering Algorithm for RBFNNs Design
 Human IT, Borås Högskola, 2000
Cited by 3 (3 self)
Abstract. This paper presents a new approach to the problem of designing Radial Basis Function Neural Networks (RBFNNs) to approximate a given function. The presented algorithm focuses on the first stage of the design, where the centers of the RBFs have to be placed. This task has commonly been solved by applying generic clustering algorithms, although in other cases specific clustering algorithms were considered. These specific algorithms improved performance by adding elements that allow them to use the information provided by the output of the function to be approximated, but they did not add problem-specific knowledge. The novelty of the newly developed algorithm is the combination of a fuzzy-possibilistic approach with a supervising parameter and the addition of a new migration step that, through the generation of RBFNNs, is able to take proper decisions on where to move the centers. The algorithm also introduces a fuzzy logic element by setting a fuzzy rule that determines the input vectors that influence each center position; this fuzzy rule considers the output of the function to be approximated and the fuzzy-possibilistic partition of the data.
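As a rough reference point for the center-placement stage, standard fuzzy c-means can be sketched as follows. Note this plain FCM is a generic stand-in: it omits the paper's possibilistic term, supervising parameter, migration step, and output-aware fuzzy rule:

```python
import numpy as np

def fuzzy_cmeans_centers(X, c, m=2.0, iters=50, seed=0):
    """Place c RBF centers with standard fuzzy c-means.

    X is an (n, d) data matrix; m > 1 is the fuzzifier.  Returns the
    (c, d) center matrix and the (c, n) membership matrix U, whose
    columns sum to 1.  (Generic FCM sketch, not the paper's method.)
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                 # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)       # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)      # standard FCM membership update
    return centers, U
```

The resulting centers (and, if desired, the memberships as initial radii estimates) would feed the later stages of RBFNN design; the paper's contribution is precisely the extra supervision that this unsupervised baseline lacks.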