Results 1 - 10 of 126
Evolving Neural Networks through Augmenting Topologies
Evolutionary Computation
Cited by 536 (112 self)
Abstract:
An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and that the components are interdependent. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
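As a concrete illustration of component (2), speciation, here is a minimal Python sketch of NEAT's compatibility distance and species assignment. It is our illustration, not the paper's code: genomes are reduced to dicts mapping innovation numbers to connection weights, and the coefficients `c1`, `c2`, `c3` and the `threshold` are illustrative defaults.

```python
# Minimal sketch of NEAT-style speciation, simplified for illustration.
# A genome is reduced to a dict: innovation number -> connection weight.

def compatibility(g1, g2, c1=1.0, c2=1.0, c3=0.4):
    """Compatibility distance delta = c1*E/N + c2*D/N + c3*W_bar, where E
    counts excess genes, D disjoint genes, and W_bar is the mean weight
    difference of matching genes (genomes assumed non-empty)."""
    innovs1, innovs2 = set(g1), set(g2)
    matching = innovs1 & innovs2
    cutoff = min(max(innovs1), max(innovs2))
    non_matching = innovs1 ^ innovs2
    excess = sum(1 for i in non_matching if i > cutoff)
    disjoint = len(non_matching) - excess
    n = max(len(g1), len(g2))
    w_bar = (sum(abs(g1[i] - g2[i]) for i in matching) / len(matching)
             if matching else 0.0)
    return c1 * excess / n + c2 * disjoint / n + c3 * w_bar

def speciate(population, representatives, threshold=3.0):
    """Place each genome into the first species whose representative is
    within the threshold; otherwise it founds a new species."""
    species = {sid: [] for sid in representatives}
    for genome in population:
        for sid, rep in representatives.items():
            if compatibility(genome, rep) < threshold:
                species[sid].append(genome)
                break
        else:
            new_sid = len(representatives)
            representatives[new_sid] = genome
            species[new_sid] = [genome]
    return species
```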
A Taxonomy for Artificial Embryogeny
2003
Cited by 199 (51 self)
Abstract:
A major challenge for evolutionary computation is to evolve phenotypes such as neural networks, sensory systems, or motor controllers at the same level of complexity as found in biological organisms. In order to meet this challenge, many researchers are proposing indirect encodings, that is, evolutionary mechanisms where the same genes are used multiple times in the process of building a phenotype. Such gene reuse allows compact representations of very complex phenotypes. Development is a natural choice for implementing indirect encodings, if only because nature itself uses this very process. Motivated by the development of embryos in nature, we define Artificial Embryogeny (AE) as the subdiscipline of evolutionary computation (EC) in which phenotypes undergo a developmental phase. An increasing number of AE systems are currently being developed, and a need has arisen for a principled approach to comparing and contrasting, and ultimately building, such systems. Thus, in this paper, we develop a principled taxonomy for AE. This taxonomy provides a unified context for long-term research in AE, so that implementation decisions can be compared and contrasted along known dimensions in the design space of embryogenic systems. It also allows predicting how the settings of various AE parameters affect the capacity to efficiently evolve complex phenotypes.
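The compactness argument for gene reuse is easy to demonstrate with a toy developmental encoding. The sketch below is our own illustration, not drawn from the paper: a single rewrite rule plays the role of a reused gene, and a few genotype symbols unfold into a much larger phenotype string.

```python
# Toy indirect (developmental) encoding: the same rewrite rule (the "gene")
# is applied at every growth step, so the phenotype is far larger than the
# genotype that produced it.

def develop(axiom, rules, steps):
    """Grow a phenotype string by reapplying the same rules each step."""
    state = axiom
    for _ in range(steps):
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

genotype = {"F": "F[+F]F"}              # one gene, reused at every step
phenotype = develop("F", genotype, steps=4)
print(len(genotype["F"]), len(phenotype))  # 7 genotype symbols -> 175-char phenotype
```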
Compositional pattern producing networks: A novel abstraction of development
2007
Cited by 122 (42 self)
Abstract:
Natural DNA can encode complexity on an enormous scale. Researchers are attempting to achieve the same representational efficiency in computers by implementing developmental encodings, i.e. encodings that map the genotype to the phenotype through a process of growth from a small starting point to a mature form. A major challenge in this effort is to find the right level of abstraction of biological development to capture its essential properties without introducing unnecessary inefficiencies. In this paper, a novel abstraction of natural development, called Compositional Pattern Producing Networks (CPPNs), is proposed. Unlike currently accepted abstractions such as iterative rewrite systems and cellular growth simulations, CPPNs map to the phenotype without local interaction, that is, each individual component of the phenotype is determined independently of every other component. Results produced with CPPNs through interactive evolution of two-dimensional images show that such an encoding can nevertheless produce structural motifs often attributed to more conventional developmental abstractions, suggesting that local interaction may not be essential to the desirable properties of natural encoding in the way that is usually assumed.
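The key property claimed here, that each phenotype component is determined independently of every other, can be shown in a few lines. The following sketch is illustrative rather than the paper's implementation: a hand-wired composition of a sine, a Gaussian, and a tanh stands in for an evolved CPPN, and each pixel is computed from its coordinates alone.

```python
# Minimal CPPN-style sketch: a pattern is produced by querying a composition
# of simple functions at each coordinate independently -- no local
# interaction between phenotype cells.
import math

def cppn(x, y):
    """A hand-wired function composition; evolution would normally choose
    both the topology and the functions."""
    d = math.sqrt(x * x + y * y)               # symmetry via distance
    h = math.sin(10.0 * x) + math.exp(-d * d)  # repetition plus a Gaussian bump
    return math.tanh(h)                        # squash to [-1, 1]

# Every cell is queried independently of every other cell.
width = height = 32
image = [[cppn(2 * i / (width - 1) - 1, 2 * j / (height - 1) - 1)
          for i in range(width)] for j in range(height)]
print(image[16][16])
```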
Real-time neuroevolution in the nero video game
IEEE Transactions on Evolutionary Computation, 2005
Cited by 120 (37 self)
Abstract:
In most modern video games, character behavior is scripted; no matter how many times the player exploits a weakness, that weakness is never repaired. Yet if game characters could learn through interacting with the player, behavior could improve as the game is played, keeping it interesting. This paper introduces the real-time NeuroEvolution of Augmenting Topologies (rtNEAT) method for evolving increasingly complex artificial neural networks in real time, as a game is being played. The rtNEAT method allows agents to change and improve during the game. In fact, rtNEAT makes possible an entirely new genre of video games in which the player trains a team of agents through a series of customized exercises. To demonstrate this concept, the NeuroEvolving Robotic Operatives (NERO) game was built based on rtNEAT. In NERO, the player trains a team of virtual robots for combat against other players' teams. This paper describes results from this novel application of machine learning, and demonstrates that rtNEAT makes possible video games like NERO where agents evolve and adapt in real time. In the future, rtNEAT may allow new kinds of educational and training applications through interactive and adapting games.
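A rough sketch of the replacement cycle that makes this real-time operation possible appears below. It is a simplification we wrote for illustration: real rtNEAT selects parents by species using adjusted fitness and evolves network topologies, whereas here genomes are plain weight lists and `crossover`/`mutate` are toy stand-ins.

```python
# Sketch of an rtNEAT-style real-time replacement cycle (simplified).
# The game loop is assumed to update each agent's fitness and age elsewhere.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    genome: list
    fitness: float = 0.0
    age: int = 0

def crossover(a, b):
    """Uniform crossover of weight lists (stand-in for NEAT crossover)."""
    return [random.choice(pair) for pair in zip(a.genome, b.genome)]

def mutate(genome, rate=0.1, scale=0.5):
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in genome]

def rtneat_tick(agents, tick, replace_every=20, min_age=50):
    """Every few game ticks, remove the worst agent that has lived long
    enough to be fairly evaluated and replace it with an offspring of two
    of the best agents, so evolution proceeds while the game keeps running."""
    if tick % replace_every != 0:
        return
    eligible = [i for i, a in enumerate(agents) if a.age >= min_age]
    if not eligible or len(agents) < 2:
        return
    worst_i = min(eligible, key=lambda i: agents[i].fitness)
    best = sorted(agents, key=lambda a: a.fitness)[-2:]
    agents[worst_i] = Agent(genome=mutate(crossover(best[0], best[1])))
```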
Efficient reinforcement learning through evolving neural network topologies
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2002), 2002
Cited by 73 (20 self)
Abstract:
Neuroevolution is currently the strongest method on the pole-balancing benchmark reinforcement learning tasks. Although earlier studies suggested that there was an advantage in evolving the network topology as well as connection weights, the leading neuroevolution systems evolve fixed networks. Whether evolving structure can improve performance is an open question. In this article, we introduce a system that evolves topologies as well as weights, NeuroEvolution of Augmenting Topologies (NEAT). We show that when structure is evolved (1) with a principled method of crossover, (2) by protecting structural innovation, and (3) through incremental growth from minimal structure, learning is significantly faster and stronger than with the best fixed-topology methods. NEAT also shows that it is possible to evolve populations of increasingly large genomes, achieving highly complex solutions that would otherwise be difficult to optimize.
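Component (1), principled crossover, relies on NEAT's historical markings: every new connection gene receives a global innovation number, so genes that share a number in two parents describe the same structure. A minimal sketch of that alignment follows; it is our illustration, with genomes again reduced to innovation-number-to-weight dicts.

```python
# Sketch of NEAT-style crossover via historical markings (simplified).
# Matching genes (same innovation number in both parents) are inherited
# randomly; disjoint and excess genes come from the fitter parent, the
# convention described in the NEAT paper.
import random

def neat_crossover(fitter, weaker):
    """Parents are dicts: innovation number -> weight."""
    child = {}
    for innov, weight in fitter.items():
        if innov in weaker:
            child[innov] = random.choice((weight, weaker[innov]))
        else:
            child[innov] = weight
    return child

p1 = {1: 0.5, 2: -0.3, 4: 0.8}   # innovation numbers align the genomes
p2 = {1: 0.1, 3: 0.9, 4: -0.2}
print(neat_crossover(p1, p2))     # genes 1 and 4 match; gene 2 comes from p1
```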
Ideal Evaluation from Coevolution
Evolutionary Computation, 2004
Cited by 68 (6 self)
Abstract:
In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown.
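The paper's reduction of evaluation to Evolutionary Multi-Objective Optimization treats each test as an objective. The sketch below, written for this listing rather than taken from the paper, shows the resulting dominance check over outcome vectors.

```python
# Illustrative sketch: with each test treated as an objective, evaluation
# becomes Pareto dominance over outcome vectors instead of a scalar fitness.

def dominates(a, b):
    """Outcome vector `a` dominates `b` if it is at least as good on every
    test and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

# Outcomes of three learners on four tests (1 = solved, 0 = failed).
learners = {"A": (1, 1, 0, 0), "B": (1, 0, 0, 0), "C": (0, 0, 1, 1)}
print(dominates(learners["A"], learners["B"]))  # True: A is strictly better
print(dominates(learners["A"], learners["C"]))  # False: incomparable
```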
Evolving neural network agents in the NERO video game
In Proceedings of the IEEE 2005 Symposium on Computational Intelligence and Games, 2005
Cited by 53 (15 self)
Abstract:
In most modern video games, character behavior is scripted; no matter how many times the player exploits a weakness, that weakness is never repaired. Yet if game characters could learn through interacting with the player, behavior could improve during gameplay, keeping it interesting. This paper introduces the real-time NeuroEvolution of Augmenting Topologies (rtNEAT) method for evolving increasingly complex artificial neural networks in real time, as a game is being played. The rtNEAT method allows agents to change and improve during the game. In fact, rtNEAT makes possible a new genre of video games in which the player teaches a team of agents through a series of customized training exercises. In order to demonstrate this concept in the NeuroEvolving Robotic Operatives (NERO) game, the player trains a team of robots for combat. This paper describes results from this novel application of machine learning, and also demonstrates how multiple agents can evolve and adapt in video games like NERO in real time using rtNEAT. In the future, rtNEAT may allow new kinds of educational and training applications that adapt online as the user gains new skills.
Cooperative Coevolution of Multi-Agent Systems
2001
Cited by 53 (4 self)
Abstract:
In certain tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. An interesting question is, how can such behavior best be evolved? When the agents are controlled with neural networks, a powerful method is to coevolve them in separate subpopulations, and test them together in the common task. In this paper, such a method, called Multi-Agent ESP (Enforced Subpopulations), is presented and demonstrated in a prey-capture task. The approach is shown to be more efficient and robust than evolving a single central controller for all agents. The role of communication in such domains is also studied, and shown to be unnecessary and even detrimental if effective behavior in the task can be expressed as role-based cooperation rather than synchronization.
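The evaluation loop behind this kind of cooperative coevolution is simple to outline. The sketch below is our simplification of the idea rather than Multi-Agent ESP itself (real ESP also decomposes each controller into neuron-level subpopulations): one controller is drawn from each agent's subpopulation, the team is evaluated jointly, and every participant is credited with the shared score. The `evaluate_team` objective is a placeholder assumption.

```python
# Sketch of a cooperative-coevolution evaluation sweep (simplified from
# the Multi-Agent ESP idea: separate subpopulations, joint evaluation).
import random

def evaluate_team(team):
    """Placeholder for a joint trial (e.g., one prey-capture episode with
    all agents acting together); the toy objective below is an assumption."""
    return -sum(sum(genome) ** 2 for genome in team)

def coevolve_step(subpops, trials=10):
    """Build random teams with one controller per subpopulation, evaluate
    them jointly, and credit each participant with the team's score."""
    scores = {}
    for _ in range(trials):
        team = [random.choice(pop) for pop in subpops]
        fitness = evaluate_team(team)
        for genome in team:
            scores.setdefault(id(genome), []).append(fitness)  # shared credit
    return {k: sum(v) / len(v) for k, v in scores.items()}

# Three agents, each evolved in its own subpopulation of eight candidates.
subpops = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
           for _ in range(3)]
print(coevolve_step(subpops))
```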