Results 1 – 10 of 12
Monotonic solution concepts in coevolution, 2005
Abstract

Cited by 13 (2 self)
Assume a coevolutionary algorithm capable of storing and utilizing all phenotypes discovered during its operation, for as long as it operates on a problem; that is, assume an algorithm with a monotonically increasing knowledge of the search space. We ask: If such an algorithm were to periodically report, over the course of its operation, the best solution found so far, would the quality of the solution reported by the algorithm improve monotonically over time? To answer this question, we construct a simple preference relation to reason about the goodness of different individual and composite phenotypic behaviors. We then show that whether the solutions reported by the coevolutionary algorithm improve monotonically with respect to this preference relation depends upon the solution concept implemented by the algorithm. We show that the solution concept implemented by the conventional coevolutionary algorithm does not guarantee monotonic improvement; in contrast, the game-theoretic solution concept of Nash equilibrium does guarantee monotonic improvement. Thus, this paper considers 1) whether global and objective metrics of goodness can be applied to coevolutionary problem domains (possibly with open-ended search spaces), and 2) whether coevolutionary algorithms can, in principle, optimize with respect to such metrics and find solutions to games of strategy.
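The abstract's central observation can be illustrated with a small sketch (ours, not the paper's): when an algorithm retains every phenotype it discovers and a global, objective preference relation exists (here simplified to a scalar quality giving a total order), the best-so-far report can never decrease. All names below (`best_so_far_reports`, `quality`) are illustrative assumptions; the paper's point is that conventional coevolutionary evaluation supplies no such global order, which is why monotonicity can fail there.

```python
import random

def best_so_far_reports(discoveries, quality):
    """After each discovery, report the quality of the best solution so far.

    `discoveries` is the stream of phenotypes found during the run; the
    algorithm's memory grows monotonically, so under a fixed total order
    on quality the reported value is non-decreasing over time.
    """
    memory = []
    reports = []
    for d in discoveries:
        memory.append(d)
        reports.append(quality(max(memory, key=quality)))
    return reports

random.seed(0)
stream = [random.random() for _ in range(50)]
reports = best_so_far_reports(stream, quality=lambda x: x)
# With a global, objective quality measure the reports never decrease:
assert all(a <= b for a, b in zip(reports, reports[1:]))
```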
Acquiring evolvability through adaptive representations
 In Proc. of Genetic and Evolutionary Computation Conference, 2007
Abstract

Cited by 10 (2 self)
Adaptive representations allow evolution to explore the space of phenotypes by choosing the most suitable set of genotypic parameters. Although such an approach is believed to be efficient on complex problems, few empirical studies have been conducted in such domains. In this paper, three neural network representations, a direct encoding, a complexifying encoding, and an implicit encoding capable of adapting the genotype-phenotype mapping are compared on Nothello, a complex game-playing domain from the AAAI General Game Playing Competition. Implicit encoding makes the search more efficient and uses several times fewer parameters. Random mutation leads to highly structured phenotypic variation that is acquired during the course of evolution rather than built into the representation itself. Thus, adaptive representations learn to become evolvable, and furthermore do so in a way that makes search efficient on difficult coevolutionary problems.
Measuring Generalization Performance in Coevolutionary Learning
Abstract

Cited by 9 (5 self)
Coevolutionary learning involves a training process where training samples are instances of solutions that interact strategically to guide the evolutionary (learning) process. One main research issue is with the generalization performance, i.e., the search for solutions (e.g., input-output mappings) that best predict the required output for any new input that has not been seen during the evolutionary process. However, there is currently no such framework for determining the generalization performance in coevolutionary learning even though the notion of generalization is well-understood in machine learning. In this paper, we introduce a theoretical framework to address this research issue. We present the framework in terms of game-playing although our results are more general. Here, a strategy’s generalization performance is its average performance against all test strategies. Given that the true value may not be determined by solving analytically a closed-form formula and is computationally prohibitive, we propose an estimation procedure that computes the average performance against a small sample of random test strategies instead. We perform a mathematical analysis to provide a statistical claim on the accuracy of our estimation procedure, which can be further improved by performing a second estimation on the variance of the random variable. For game-playing, it is well-known that one is more interested in the generalization
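The estimation procedure described above, averaging over a small sample of random test strategies, is essentially Monte Carlo estimation. A minimal sketch, with all names (`estimate_generalization`, `sample_test`) and the toy payoff being our own illustrative assumptions rather than the paper's formulation:

```python
import random
import statistics

def estimate_generalization(strategy, payoff, sample_test, n, rng):
    """Estimate a strategy's generalization performance as its average
    payoff against n randomly sampled test strategies."""
    scores = [payoff(strategy, sample_test(rng)) for _ in range(n)]
    mean = statistics.fmean(scores)
    # A second estimate, of the variance, tightens the statistical claim
    # on the accuracy of the mean (standard error of the sample average).
    stderr = statistics.stdev(scores) / n ** 0.5 if n > 1 else float("inf")
    return mean, stderr

# Toy game: payoff is 1 when the two strategies pick the same action.
payoff = lambda s, t: 1.0 if s == t else 0.0
mean, stderr = estimate_generalization(
    strategy=0,
    payoff=payoff,
    sample_test=lambda r: r.randrange(2),  # uniform random test strategy
    n=1000,
    rng=random.Random(1),
)
assert abs(mean - 0.5) < 0.1  # true generalization performance is 0.5
```

The sample average converges to the true average performance against the whole test-strategy space, with the reported standard error shrinking as 1/sqrt(n).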
DECA: Dimension Extracting Coevolutionary Algorithm
 In Proceedings of the 8th annual Genetic and Evolutionary Computation Conference, 2006
Abstract

Cited by 7 (3 self)
Coevolution has often been based on averaged outcomes, resulting in unstable evaluation. Several theoretical approaches have used archives to provide stable evaluation. However, the number of tests required by some of these approaches can be prohibitive for practical applications. Recent work has shown the existence of a set of underlying objectives which compress evaluation information into a potentially small set of dimensions. We consider whether these underlying objectives can be approximated online, and used for evaluation in a coevolution algorithm. The Dimension Extracting Coevolutionary Algorithm (DECA) is compared to several recent reliable coevolution algorithms on a Numbers game problem, and found to perform efficiently. Application to the more realistic Tartarus problem is shown to be feasible. Implications for current coevolution research are discussed.
Emergent Geometric Organization and Informative Dimensions in Coevolutionary Algorithms, 2007
Abstract

Cited by 4 (1 self)
To my parents, who gifted me with curiosity and the stubbornness to follow where it leads. Acknowledgments It almost goes without saying that a piece of work this size could not have been finished without the help of innumerable people. I wanted to extend my gratitude to those who had the greatest impact on my thinking and writing over the past eight years. First, to my advisor Jordan Pollack. Jordan’s visionary quest for “mindless intelligence,” a search for artificial intelligence without modeling the human brain or mind, kept me engaged with a set of ideas and a set of people I would otherwise never have encountered. Jordan’s enthusiasm and deep understanding, not to mention his uncanny ability to direct attention to fertile areas of research, are truly contagious and inspiring. To Timothy Hickey and Marc Toussaint, who sat on my dissertation examining committee. Tim’s careful scrutiny uncovered what would have been an embarrassing error. Marc, who has been interested in my work for several years, painstakingly scoured the entirety of this dissertation, offering a myriad of small and large improvements along with suggestive interpretations of these ideas from perspectives I had not considered.
Why coevolution doesn’t “work”: superiority and progress in coevolution, 2009
Abstract

Cited by 2 (0 self)
Coevolution often gives rise to counterintuitive dynamics that defy our expectations. Here we suggest that much of the confusion surrounding coevolution results from imprecise notions of superiority and progress. In particular, we note that in the literature, three distinct notions of progress are implicitly lumped together: local progress (superior performance against current opponents), historical progress (superior performance against previous opponents) and global progress (superior performance against the entire opponent space). As a result, valid conditions for one type of progress are unduly assumed to lead to another. In particular, the confusion between historical and global progress is a case of a common error, namely using the training set as a test set. This error is prevalent among standard methods for coevolutionary analysis (CIAO, Master Tournament, Dominance Tournament, etc.). By clearly defining and distinguishing between different types of progress, we identify limitations with existing techniques and algorithms, address them, and generally facilitate discussion and understanding of coevolution. We conclude that the concepts proposed in this paper correspond to important aspects of the coevolutionary process.
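The three notions of progress distinguished above can be made concrete with a short sketch. This is our own illustration, not the paper's code: `progress_report` compares each generation's learner against its predecessor on three opponent sets, current opponents (local), all previously encountered opponents (historical), and the entire opponent space (global), for a toy "numbers game" where a higher number beats a lower one.

```python
def mean_score(strategy, opponents, payoff):
    """Average payoff of one strategy against a set of opponents."""
    return sum(payoff(strategy, o) for o in opponents) / len(opponents)

def progress_report(learners, opponents_per_gen, opponent_space, payoff):
    """For each generation t > 0, test whether learner t improves on
    learner t-1 under the three notions of progress."""
    report = []
    for t in range(1, len(learners)):
        cur, prev = learners[t], learners[t - 1]
        # Historical progress only ever sees opponents met so far, i.e.
        # the training set; global progress uses the whole space.
        seen = [o for gen in opponents_per_gen[:t] for o in gen]
        report.append({
            "local": mean_score(cur, opponents_per_gen[t], payoff)
                     > mean_score(prev, opponents_per_gen[t], payoff),
            "historical": mean_score(cur, seen, payoff)
                          > mean_score(prev, seen, payoff),
            "global": mean_score(cur, opponent_space, payoff)
                      > mean_score(prev, opponent_space, payoff),
        })
    return report

# Toy numbers game: payoff 1 if the strategy's number exceeds the opponent's.
payoff = lambda s, o: 1.0 if s > o else 0.0
r = progress_report([1, 2], [[0], [1]], [0, 1, 2, 3], payoff)
# Local and global progress hold, yet historical progress does not:
assert r == [{"local": True, "historical": False, "global": True}]
```

The toy run shows exactly the decoupling the abstract warns about: a measure computed on previously seen opponents can disagree with progress over the full opponent space.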
The Parallel Nash Memory for Asymmetric Games, 2006
Abstract

Cited by 1 (0 self)
Coevolutionary algorithms search for test cases as part of the search process. The resulting adaptive evaluation function takes away the need to define a fixed evaluation function, but may also be unstable and thereby prevent reliable progress. Recent work in coevolution has therefore focused on algorithms that guarantee progress with respect to a given solution concept. The Nash Memory archive guarantees monotonicity with respect to the game-theoretic solution concept of the Nash equilibrium, but is limited to symmetric games. We present an extension of the Nash Memory that guarantees monotonicity for asymmetric games. The Parallel Nash Memory is demonstrated in experiments, and its performance on general-sum games is discussed.
Evolving small-board Go players using coevolutionary temporal difference learning with archives
 International Journal of Applied Mathematics and Computer Science 21(4): 717–731, 2011
Abstract

Cited by 1 (0 self)
We apply Coevolutionary Temporal Difference Learning (CTDL) to learn small-board Go strategies represented as weighted piece counters. CTDL is a randomized learning technique which interweaves two search processes that operate in the intra-game and inter-game mode. Intra-game learning is driven by gradient-descent Temporal Difference Learning (TDL), a reinforcement learning method that updates the board evaluation function according to differences observed between its values for consecutively visited game states. For the inter-game learning component, we provide a coevolutionary algorithm that maintains a sample of strategies and uses the outcomes of games played between them to iteratively modify the probability distribution, according to which new strategies are generated and added to the sample. We analyze CTDL’s sensitivity to all important parameters, including the trace decay constant that controls the lookahead horizon of TDL, and the relative intensity of intra-game and inter-game learning. We also investigate how the presence of memory (an archive) affects the search performance, and find that the archived approach is superior to other techniques considered here and produces strategies that outperform a handcrafted weighted piece counter strategy and simple liberty-based heuristics. This encouraging result can be potentially generalized not only to other strategy representations used for small-board Go, but also to various games and a broader class of problems, because CTDL is generic and does not rely on any problem-specific knowledge.
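The intra-game component described above, gradient-descent TDL on a linear board evaluation (a weighted piece counter is linear in its board features), can be sketched as a single TD(0) step. This is our own simplified illustration, not the paper's implementation; a trace-decay constant (lambda) would extend it to TD(lambda) by accumulating eligibility traces across the game:

```python
def td0_update(weights, feats_t, feats_t1, reward, alpha=0.01, gamma=1.0):
    """One gradient-descent TD(0) step on a linear evaluation V(s) = w . phi(s).

    The weights move in proportion to the temporal difference between the
    values of two consecutively visited game states.
    """
    v_t = sum(w * f for w, f in zip(weights, feats_t))
    v_t1 = sum(w * f for w, f in zip(weights, feats_t1))
    delta = reward + gamma * v_t1 - v_t   # temporal difference error
    # Gradient of V w.r.t. w is just the feature vector for a linear model.
    return [w + alpha * delta * f for w, f in zip(weights, feats_t)]

# Toy check: with an active feature and a positive TD error, its weight rises.
w = td0_update([0.0], feats_t=[1.0], feats_t1=[0.0], reward=1.0)
assert w[0] > 0.0
```

In the paper's setting the reward would arrive only at the end of a game (win/loss), and the feature vector would encode board-position counts weighted by the piece counter.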
The Road to Everywhere: Evolution, Complexity and Progress in Natural and Artificial Systems
Abstract
We mowen nat, although we hadden it sworn, It overtake, it slit awey so faste. It wole us maken beggers atte laste! Evolution is notorious for its creative power, but also for giving rise to complex, unpredictable dynamics. As a result, practitioners of artificial evolution have encountered difficulties in predicting, analysing, or even understanding the outcome of their experiments. In particular, the concept of evolutionary “progress” (whether in the sense of performance increase or complexity growth) has given rise to much debate and confusion. After a careful description of the mechanisms of evolution and natural selection, we provide usable concepts of performance and progress in coevolution. In particular, we introduce a distinction between three types of progress: local, historical, and global, which we suggest underlies much of the confusion that surrounds coevolutionary dynamics. Similarly, we provide a comprehensive answer to the question of whether an “arrow of complexity” exists in evolution. We introduce several methods to detect and analyse performance and progress in coevolutionary experiments. We propose a statistical
Learning Control for Xpilot Agents in the Core
Abstract
Abstract — Xpilot, a network game where agents engage in space combat, has been shown to be a good test bed for controller learning systems. In this paper, we introduce the Core, an Xpilot learning environment where a population of learning agents interact locally through tournament selection, crossover, and mutation to produce offspring in the evolution of controllers. The system does not require the researcher to develop a fitness function or suitable agents to engage with the evolving agent. Instead, it employs a form of coevolution where the environment, made up of the population of agents, evolves to continually challenge individual agents evolving within it. Tests show its successful use in evolving controllers for combat agents in Xpilot.