Results 1–10 of 85
Evolutionary computation in structural design
 Journal of Engineering with Computers
, 2001
"... Evolutionary computation is emerging as a new engineering computational paradigm, which may significantly change the present structural design practice. For this reason, an extensive study of evolutionary computation in the context of structural design has been conducted in the Information Technolog ..."
Abstract

Cited by 54 (7 self)
Evolutionary computation is emerging as a new engineering computational paradigm, which may significantly change the present structural design practice. For this reason, an extensive study of evolutionary computation in the context of structural design has been conducted in the Information Technology and Engineering School at George Mason University and its results are reported here. First, a general introduction to evolutionary computation is presented and recent developments in this field are briefly described. Next, the field of evolutionary design is introduced and its relevance to structural design is explained. Further, the issue of creativity/novelty is discussed and possible ways of achieving it during a structural design process are suggested. Current research progress in building engineering systems' representations, one of the key issues in evolutionary design, is subsequently discussed. Next, recent developments in constraint-handling methods in evolutionary optimization are reported. Further, the rapidly growing field of evolutionary multiobjective optimization is presented and briefly described. An emerging subfield of coevolutionary design is subsequently introduced and its current advancements reported. Next, a comprehensive review of the applications of evolutionary computation in structural design is provided and chronologically classified. Finally, a summary of the current research status and a discussion on the most promising paths of future research are also presented.
Accelerated Neural Evolution through Cooperatively Coevolved Synapses
"... Many complex control problems require sophisticated solutions that are not amenable to traditional controller design. Not only is it difficult to model real world systems, but often it is unclear what kind of behavior is required to solve the task. Reinforcement learning (RL) approaches have made pr ..."
Abstract

Cited by 54 (10 self)
Many complex control problems require sophisticated solutions that are not amenable to traditional controller design. Not only is it difficult to model real world systems, but often it is unclear what kind of behavior is required to solve the task. Reinforcement learning (RL) approaches have made progress by using direct interaction with the task environment, but have so far not scaled well to large state spaces and environments that are not fully observable. In recent years, neuroevolution, the artificial evolution of neural networks, has had remarkable success in tasks that exhibit these two properties. In this paper, we compare a neuroevolution method called Cooperative Synapse Neuroevolution (CoSyNE), which uses cooperative coevolution at the level of individual synaptic weights, to a broad range of reinforcement learning algorithms on very difficult versions of the pole balancing problem that involve large (continuous) state spaces and hidden state. CoSyNE is shown to be significantly more efficient and powerful than the other methods on these tasks.
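The core mechanism described above, one subpopulation per synaptic weight, with complete networks assembled row-wise and weight values permuted between generations, can be sketched roughly as follows. This is a toy illustration on a separable surrogate fitness, not the paper's implementation; all constants, names, and the target vector are illustrative choices:

```python
import random

random.seed(0)

N_WEIGHTS, POP_SIZE, GENERATIONS = 4, 20, 60
TARGET = [0.5, -1.0, 2.0, 0.0]   # toy stand-in for a set of "good" network weights

def fitness(weights):
    # Higher is better: negative squared error to the target weight vector.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

# One subpopulation per synaptic weight; the i-th value of every
# subpopulation together forms one complete candidate network.
subpops = [[random.uniform(-2.0, 2.0) for _ in range(POP_SIZE)]
           for _ in range(N_WEIGHTS)]

best_f, best_net = float("-inf"), None
for _ in range(GENERATIONS):
    # Evaluate the complete networks formed from aligned positions.
    scored = sorted(
        ((fitness([subpops[w][i] for w in range(N_WEIGHTS)]), i)
         for i in range(POP_SIZE)),
        reverse=True)
    if scored[0][0] > best_f:
        best_f = scored[0][0]
        best_net = [subpops[w][scored[0][1]] for w in range(N_WEIGHTS)]
    elite = [i for _, i in scored[: POP_SIZE // 4]]
    for w in range(N_WEIGHTS):
        parents = [subpops[w][i] for i in elite]
        # Refill each subpopulation by mutating elite weight values, then
        # shuffle it so weights recombine into new candidate networks.
        subpops[w] = [random.choice(parents) + random.gauss(0.0, 0.1)
                      for _ in range(POP_SIZE)]
        random.shuffle(subpops[w])
```

The shuffle step is the distinctive ingredient: because networks are re-assembled from permuted subpopulations each generation, a weight value is credited only if it works well in many different collaborations.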
Improving coevolutionary search for optimal multiagent behaviors
 In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI)
, 2003
"... Evolutionary computation is a useful technique for learning behaviors in multiagent systems. Among the several types of evolutionary computation, one natural and popular method is to coevolve multiagent behaviors in multiple, cooperating populations. Recent research has suggested that coevolutionary ..."
Abstract

Cited by 34 (12 self)
Evolutionary computation is a useful technique for learning behaviors in multiagent systems. Among the several types of evolutionary computation, one natural and popular method is to coevolve multiagent behaviors in multiple, cooperating populations. Recent research has suggested that coevolutionary systems may favor stability rather than performance in some domains. In order to improve upon existing methods, this paper examines the idea of modifying traditional coevolution, biasing it to search for maximal rewards. We introduce a theoretical justification of the improved method and present experiments in three problem domains. We conclude that biasing can help coevolution find better results in some multiagent problem domains.
The MaxSolve algorithm for coevolution
 In Beyer, H.-G. (Ed.), Proceedings of the Genetic and Evolutionary Computation Conference, GECCO-05
, 2005
"... Coevolution can be used to adaptively choose the tests used for evaluating candidate solutions. A longstanding question is how this dynamic setup may be organized to yield reliable search methods. Reliability can only be considered in connection with a particular solution concept specifying what co ..."
Abstract

Cited by 30 (2 self)
Coevolution can be used to adaptively choose the tests used for evaluating candidate solutions. A longstanding question is how this dynamic setup may be organized to yield reliable search methods. Reliability can only be considered in connection with a particular solution concept specifying what constitutes a solution. Recently, monotonic coevolution algorithms have been proposed for several solution concepts. Here, we introduce a new algorithm that guarantees monotonicity for the solution concept of maximizing the expected utility of a candidate solution. The method, called MaxSolve, is compared to the IPCA algorithm and found to perform more efficiently for a range of parameter values on an abstract test problem.
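The solution concept involved can be illustrated with a small candidate-versus-test interaction matrix. This sketch only shows the "maximize expected utility" ranking and one simplistic notion of an informative test; it is not the MaxSolve update procedure itself, and the matrix is invented for illustration:

```python
# Interaction matrix: G[c][t] == 1 if candidate c solves test t, else 0.
G = [[1, 1, 0, 1],
     [1, 0, 0, 1],
     [1, 1, 1, 1]]

def best_candidate(G):
    # Under the "maximize expected utility" solution concept, a candidate's
    # value against a uniformly random test is just its mean row value.
    scores = [sum(row) / len(row) for row in G]
    return max(range(len(G)), key=scores.__getitem__)

def informative_tests(G):
    # A crude filter: keep only tests that distinguish at least two
    # candidates, since uniform columns exert no selection pressure.
    keep = []
    for t in range(len(G[0])):
        if len({G[c][t] for c in range(len(G))}) > 1:
            keep.append(t)
    return keep
```

Here candidate 2 solves all four tests and would be retained, while tests 0 and 3, solved by everyone, carry no information for ranking candidates.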
Coevolution of role-based cooperation in multiagent systems
, 2007
"... In certain tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. An interesting question is, how can such behavior be best evolved? A powerful approach is to control the agents with neural networks, coevolve them in separate subpopulations, an ..."
Abstract

Cited by 27 (3 self)
In certain tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. An interesting question is, how can such behavior be best evolved? A powerful approach is to control the agents with neural networks, coevolve them in separate subpopulations, and test them together in the common task. In this paper, such a method, called Multi-Agent ESP (Enforced Sub-Populations), is proposed and demonstrated in a prey-capture task. First, the approach is shown to be more efficient than evolving a single central controller for all agents. Second, cooperation is found to be most efficient through stigmergy, i.e. through role-based responses to the environment, rather than direct communication between the agents. Together these results suggest that role-based cooperation is an effective strategy in certain multiagent domains. [This paper is a revision of AI01287.]
Theoretical advantages of lenient learners: An evolutionary game theoretic perspective
 Journal of Machine Learning Research
"... This paper presents the dynamics of multiple learning agents from an evolutionary game theoretic perspective. We provide replicator dynamics models for cooperative coevolutionary algorithms and for traditional multiagent Qlearning, and we extend these differential equations to account for lenient l ..."
Abstract

Cited by 24 (12 self)
This paper presents the dynamics of multiple learning agents from an evolutionary game theoretic perspective. We provide replicator dynamics models for cooperative coevolutionary algorithms and for traditional multiagent Q-learning, and we extend these differential equations to account for lenient learners: agents that forgive possible mismatched teammate actions that resulted in low rewards. We use these extended formal models to study the convergence guarantees for these algorithms, and also to visualize the basins of attraction to optimal and suboptimal solutions in two benchmark coordination problems. The paper demonstrates that lenience provides learners with more accurate information about the benefits of performing their actions, resulting in higher likelihood of convergence to the globally optimal solution. In addition, the analysis indicates that the choice of learning algorithm has an insignificant impact on the overall performance of multiagent learning algorithms; rather, the performance of these algorithms depends primarily on the level of lenience that the agents exhibit to one another. Finally, the research herein supports the strength and generality of evolutionary game theory as a backbone for multiagent learning.
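The effect of lenience on the basin of attraction can be reproduced numerically: two-population replicator dynamics on a 2x2 coordination game, with lenience modeled as keeping the best payoff over k independent pairings. The payoff matrix, starting point, and choice of k below are illustrative, not taken from the paper:

```python
A = [[10, 0],
     [0, 5]]        # joint action (0,0) is optimal, (1,1) is a suboptimal equilibrium

def lenient_payoff(i, q, k):
    # Expected payoff of action i against partner mixture q when the learner
    # keeps the best of k pairings (closed form for two partner actions).
    hi, lo = (0, 1) if A[i][0] >= A[i][1] else (1, 0)
    p_hi = 1.0 - (1.0 - q[hi]) ** k   # at least one of k partners plays `hi`
    return A[i][hi] * p_hi + A[i][lo] * (1.0 - p_hi)

def replicate(x, y, k, steps=200):
    # Discrete-time replicator update for both populations.
    for _ in range(steps):
        fx = [lenient_payoff(i, y, k) for i in range(2)]
        fy = [lenient_payoff(j, x, k) for j in range(2)]
        mean_fx = x[0] * fx[0] + x[1] * fx[1]
        mean_fy = y[0] * fy[0] + y[1] * fy[1]
        x = [x[i] * fx[i] / mean_fx for i in range(2)]
        y = [y[j] * fy[j] / mean_fy for j in range(2)]
    return x, y

start = [0.3, 0.7]                         # biased toward the suboptimal action
plain, _ = replicate(start, start, k=1)    # drifts to the suboptimal equilibrium
lenient, _ = replicate(start, start, k=5)  # lenience recovers the optimum
```

From the same starting mixture, the non-lenient dynamics (k=1) converge to the suboptimal joint action because the optimal action's average payoff is dragged down by miscoordinations, while k=5 lenience filters those out and converges to the optimum.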
Understanding cooperative coevolutionary dynamics via simple fitness landscapes
 In Proceedings of the Genetic and Evolutionary Computation Conference
, 2005
"... Cooperative coevolution is often used to solve difficult optimization problems by means of problem decomposition. Its performance for such tasks can vary widely from good to disappointing. One of the reasons for this is that attempts to improve coevolutionary performance using traditional EC analy ..."
Abstract

Cited by 17 (3 self)
Cooperative coevolution is often used to solve difficult optimization problems by means of problem decomposition. Its performance for such tasks can vary widely from good to disappointing. One of the reasons for this is that attempts to improve coevolutionary performance using traditional EC analysis techniques often fail to provide the necessary insights into the dynamics of coevolutionary systems, a key factor affecting performance. In this paper we use two simple fitness landscapes to illustrate the importance of taking a dynamical systems approach to analyzing coevolutionary algorithms in order to understand them better and to improve their problem solving performance.
A sensitivity analysis of a cooperative coevolutionary algorithm biased for optimization
, 2004
"... Recent theoretical work helped explain certain optimizationrelated pathologies in cooperative coevolutionary algorithms (CCEAs). Such explanations have led to adopting specific and constructive strategies for improving CCEA optimization performance by biasing the algorithm toward ideal collaboratio ..."
Abstract

Cited by 17 (7 self)
Recent theoretical work helped explain certain optimization-related pathologies in cooperative coevolutionary algorithms (CCEAs). Such explanations have led to adopting specific and constructive strategies for improving CCEA optimization performance by biasing the algorithm toward ideal collaboration. This paper investigates how sensitivity to the degree of bias (set in advance) is affected by certain algorithmic and problem properties. We discover that the previous static biasing approach is quite sensitive to a number of problem properties, and we propose a stochastic alternative which alleviates this problem. We believe that finding appropriate biasing rates is more feasible with this new biasing technique.
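The two biasing schemes being contrasted can be written down in a few lines. This is our illustrative reading of the abstract, with hypothetical function names: static biasing blends the collaboration reward with the reward earned alongside the best collaborator found so far, while the stochastic alternative flips a coin between the two:

```python
import random

rng = random.Random(0)

def static_bias_fitness(reward_with_partner, reward_with_best, delta):
    # Static bias (rate delta fixed in advance): a deterministic blend of the
    # ordinary collaboration reward and the ideal-collaboration reward.
    return (1.0 - delta) * reward_with_partner + delta * reward_with_best

def stochastic_bias_fitness(reward_with_partner, reward_with_best, delta):
    # Stochastic alternative: credit the ideal-collaboration reward with
    # probability delta, and the ordinary collaboration reward otherwise.
    if rng.random() < delta:
        return reward_with_best
    return reward_with_partner
```

The stochastic form never reports a fitness value that no actual collaboration produced, which is one plausible reason it could be less sensitive to the exact choice of delta.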
A visual demonstration of convergence properties of cooperative coevolution
 In Parallel Problem Solving from Nature – PPSN 2004
, 2004
"... We introduce a model for cooperative coevolutionary algorithms (CCEAs) using partial mixing, which allows us to compute the expected longrun convergence of such algorithms when individuals ’ fitness is based on the maximum payoff of some N evaluations with partners chosen at random from the other ..."
Abstract

Cited by 15 (10 self)
We introduce a model for cooperative coevolutionary algorithms (CCEAs) using partial mixing, which allows us to compute the expected long-run convergence of such algorithms when individuals' fitness is based on the maximum payoff of some N evaluations with partners chosen at random from the other population. Using this model, we devise novel visualization mechanisms to attempt to qualitatively explain a difficult-to-conceptualize pathology in CCEAs: the tendency for them to converge to suboptimal Nash equilibria. We further demonstrate visually how increasing the size of N, or biasing the fitness to include an ideal-collaboration factor, both improve the likelihood of optimal convergence, and under which initial population configurations they are not much help.