Results 1 – 7 of 7
A New Evolutionary System for Evolving Artificial Neural Networks
 IEEE Transactions on Neural Networks
, 1996
Abstract

Cited by 156 (35 self)
This paper presents a new evolutionary system, i.e., EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP) [1], [2], [3]. Unlike most previous studies on evolving ANNs, this paper puts its emphasis on evolving ANNs' behaviours. This is one of the primary reasons why EP is adopted. Five mutation operators proposed in EPNet reflect such an emphasis on evolving behaviours. Close behavioural links between parents and their offspring are maintained by various mutations, such as partial training and node splitting. EPNet evolves ANN architectures and connection weights (including biases) simultaneously in order to reduce the noise in fitness evaluation. The parsimony of evolved ANNs is encouraged by preferring node/connection deletion to addition. EPNet has been tested on a number of benchmark problems in machine learning and ANNs, such as the parity problem and medical diagnosis problems (bre...
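The abstract mentions node splitting as a mutation that keeps close behavioural links between parent and offspring. A minimal sketch of one common cell-division scheme illustrates why: if both child nodes copy the parent's incoming weights, they compute the same activation, and sharing the outgoing weight as (1 + alpha)·w and -alpha·w sums back to w, so the network's output is unchanged at the moment of splitting. The function name and the parameter `alpha` are illustrative; this is not necessarily EPNet's exact operator.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def split_node(w_in, w_out, alpha=0.4):
    """Split one hidden node into two (illustrative cell-division scheme).

    Both children copy the incoming weights, so each computes the same
    activation as the parent; the parent's outgoing weight w_out is shared
    as (1 + alpha) * w_out and -alpha * w_out, which sums back to w_out.
    """
    child_a = (list(w_in), (1 + alpha) * w_out)
    child_b = (list(w_in), -alpha * w_out)
    return child_a, child_b

# Compare the network's output through the parent node with the output
# through its two children: they agree exactly at split time.
x = [0.5, -1.2]
w_in, w_out = [0.8, 0.3], 1.5
parent_out = w_out * sigmoid(sum(w * xi for w, xi in zip(w_in, x)))
(wa, oa), (wb, ob) = split_node(w_in, w_out)
children_out = (oa * sigmoid(sum(w * xi for w, xi in zip(wa, x)))
                + ob * sigmoid(sum(w * xi for w, xi in zip(wb, x))))
```

Subsequent weight mutations can then move the two children apart, so the split adds capacity without the fitness jump that a randomly initialised node would cause.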
Improving the Rprop Learning Algorithm
 PROCEEDINGS OF THE SECOND INTERNATIONAL SYMPOSIUM ON NEURAL COMPUTATION (NC 2000)
, 2000
Abstract

Cited by 41 (7 self)
The Rprop algorithm proposed by Riedmiller and Braun is one of the best performing first-order learning methods for neural networks. We introduce modifications of the algorithm that improve its learning speed. The resulting speedup is experimentally shown for a set of neural network learning tasks as well as for artificial error surfaces.
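For readers unfamiliar with the baseline being improved here, a minimal sketch of plain Rprop follows. Rprop ignores gradient magnitudes and adapts a per-weight step size from the sign of each partial derivative: the step grows while the sign is stable and shrinks when it flips. This sketch uses the variant that simply skips the update after a sign flip (no weight backtracking); the function name and default constants are illustrative, though 1.2 and 0.5 are the commonly cited increase/decrease factors.

```python
def rprop_minimize(grad_fn, w0, steps=100, eta_plus=1.2, eta_minus=0.5,
                   delta0=0.1, delta_min=1e-6, delta_max=50.0):
    """Minimise a function via Rprop, given only its gradient grad_fn."""
    w = list(w0)
    delta = [delta0] * len(w)       # one adaptive step size per weight
    prev_g = [0.0] * len(w)
    for _ in range(steps):
        g = list(grad_fn(w))
        for i in range(len(w)):
            s = prev_g[i] * g[i]
            if s > 0:               # same sign as last step: accelerate
                delta[i] = min(delta[i] * eta_plus, delta_max)
            elif s < 0:             # sign flipped: slow down, skip update
                delta[i] = max(delta[i] * eta_minus, delta_min)
                g[i] = 0.0
            if g[i] > 0:
                w[i] -= delta[i]    # step against the gradient sign
            elif g[i] < 0:
                w[i] += delta[i]
            prev_g[i] = g[i]
    return w

# Minimise f(w) = (w[0] - 3)^2 + (w[1] + 1)^2 from its gradient.
w = rprop_minimize(lambda w: [2 * (w[0] - 3), 2 * (w[1] + 1)], [0.0, 0.0])
```

Because only gradient signs are used, the method is insensitive to badly scaled error surfaces, which is the property the modifications in this paper build on.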
A Comparison of Matrix Rewriting Versus Direct Encoding for Evolving Neural Networks
, 1998
Abstract

Cited by 26 (0 self)
The intuitive expectation is that the scheme used to encode the neural network in the chromosome should be critical to the success of evolving neural networks to solve difficult problems. In 1990 Kitano [1] published an encoding scheme based on context-free parallel matrix rewriting. The method allowed compact, finite chromosomes to grow neural networks of potentially infinite size. Results were presented that demonstrated superior evolutionary properties of the matrix rewriting method compared to a simple direct encoding. In this paper, we present results that contradict those findings, and demonstrate that a genetic algorithm (GA) using a direct encoding can find good individuals just as efficiently as a GA using matrix rewriting.
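The "direct encoding" compared against matrix rewriting can be sketched very simply: the chromosome is a flat bitstring that maps one-to-one onto the network's connectivity matrix. The helper name and the 3-unit example below are illustrative, not taken from the paper.

```python
def decode_direct(bits, n):
    """Decode a length n*n bitstring (row-major) into an adjacency matrix.

    Entry [i][j] == 1 means there is a connection from unit i to unit j;
    the genotype maps one-to-one onto the phenotype's connectivity.
    """
    assert len(bits) == n * n
    return [[bits[i * n + j] for j in range(n)] for i in range(n)]

# A 3-unit chromosome: feedforward chain 0 -> 1 -> 2 plus a skip link 0 -> 2.
genome = [0, 1, 1,
          0, 0, 1,
          0, 0, 0]
adj = decode_direct(genome, 3)
```

The chromosome length grows as n^2 with network size, which is exactly the scaling argument usually made in favour of grammar-based encodings like Kitano's; the paper's contribution is evidence that, in practice, this does not prevent the direct encoding from searching just as efficiently.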
Efficient Evolution of Asymmetric Recurrent Neural Networks Using PDGP-Inspired . . .
, 1998
Abstract

Cited by 19 (4 self)
Recurrent neural networks are particularly useful for processing time sequences and simulating dynamical systems. However, methods for building recurrent architectures have been hindered by the fact that available training algorithms are considerably more complex than those for feedforward networks. In this paper ...
Evolving Neural Networks Using a Dual Representation with a Combined Crossover Operator
, 1998
Abstract

Cited by 7 (6 self)
In this paper a new approach to the evolution of neural networks is presented. A linear chromosome combined with a grid-based representation of the network, and a new crossover operator, allow the evolution of the architecture and the weights simultaneously. In our approach there is no need for a separate weight optimization procedure, and networks with more than one type of activation function can be evolved. A pruning strategy is also introduced, which leads to the generation of solutions with varying degrees of complexity. Results of the application of the method to several binary classification problems are reported.
Forward-Backward Building Blocks for Evolving Neural Networks With Intrinsic Learning Behaviours
, 1997
Abstract

Cited by 6 (5 self)
This paper describes the forward-backward module: a simple building block that allows the evolution of neural networks with intrinsic supervised learning ability. This expands the range of networks that can be efficiently evolved compared to previous approaches, and also enables the networks to be invertible, i.e., once a network has been evolved for a given problem domain, and trained on a particular dataset, the network can then be run backwards to observe what kind of mapping has been learned, or for use in control problems. A demonstration is given of the kind of self-training networks that could be evolved.

1 Introduction

Despite much research into evolving neural networks, there is relatively little work published on evolving learning behaviours for neural networks. Within this area we can identify four kinds of approach:
- Write down a set of learning rules with variable parameters for a given network architecture, then evolve the choice of rules and their parameters.
- Evo...
CONFIGURING MULTI-AGENT SYSTEMS FOR SOFT-SENSING PROCESSING IN INDUSTRIAL AND OTHER ENVIRONMENTS
Abstract
ABSTRACT. Possibilities for implementation of intelligent agents as distributed systems in soft-sensing applications are introduced. The research includes different options with suitable neural networks serving as bases for intelligent agents of different types. The investigation in the paper is oriented towards solutions of highly sophisticated soft-sensing problems in dynamically changing environments of industrial implementations, but it is also applicable in other environments, the Internet for example.