## SWAF: Swarm Algorithm Framework for Numerical Optimization (2004)

Venue: | Genetic and Evolutionary Computation Conference (GECCO). Springer-Verlag |

Citations: | 5 - 1 self |

### BibTeX

@INPROCEEDINGS{Xie04swaf:swarm,
  author    = {Xiao-feng Xie and Wen-jun Zhang},
  title     = {SWAF: Swarm Algorithm Framework for Numerical Optimization},
  booktitle = {Genetic and Evolutionary Computation Conference (GECCO)},
  year      = {2004},
  pages     = {238--250},
  publisher = {Springer-Verlag}
}

### Abstract

A swarm algorithm framework (SWAF), realized by agent-based modeling, is presented to solve numerical optimization problems. Each agent is a bare-bones cognitive architecture that learns knowledge by appropriately deploying a set of simple rules as fast and frugal heuristics. Two essential categories of rules, generate-and-test rules and problem-formulation rules, are implemented, and macro rules built from them, by both simple combination and subsymbolic deployment of multiple rules, are also studied. Experimental results on benchmark problems are presented, and performance comparisons between SWAF and other existing algorithms indicate that it is efficient.

### Citations

3933 | Optimization by simulated annealing - Kirkpatrick, Gelatt, et al. - 1983 |

3180 | Genetic Programming: On the Programming of Computers by Means of Natural Selection - Koza - 1992 |

3037 | Adaptation in Natural and Artificial Systems - Holland - 1975 |

1945 | Particle Swarm Optimization - Kennedy, Eberhart - 1995 |

1088 | Social learning theory - Bandura - 1977 |

Citation Context: ... $g^{(t)} := p^{(t)}$, and $DR = U(1, D)$, where $U(z_l, z_u)$ is a random integer value within $[z_l, z_u]$. For the $d$th dimension [32, 37]:

$$\text{IF } \big(U(0,1) < CR \;\text{OR}\; d = DR\big) \text{ THEN } x_d^{(t+1)} = g_d^{(t)} + SF \cdot \Delta_{NV,d}^{(t)} \tag{5}$$

where $0 \le CR \le 1$, $DR$ ensures variation in at least one dimension, $0 < SF < 1.2$, and $\Delta_{NV}^{(t)} = \sum_{1}^{NV} \Delta_1^{(t)}$, where each difference vector $\Delta_1^{(t)} = p_{U(1,N)}^{(t)} - p_{U(1,N)}^{(t)}$ ...
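The DE-style variation rule in the citation context above (Eq. (5)) can be sketched in Python. The names `de_variation`, `CR`, `SF`, and `NV` follow the snippet's symbols; the population representation and default parameter values are illustrative assumptions, not the paper's code:

```python
import random

def de_variation(g, pop, CR=0.9, SF=0.5, NV=1):
    """Sketch of the variation rule in Eq. (5), under assumed names.

    g: current best point (list of floats); pop: list of knowledge points;
    CR: crossover rate; SF: scale factor; NV: number of difference vectors.
    DR forces the variation to occur in at least one dimension.
    """
    D, N = len(g), len(pop)
    DR = random.randrange(D)  # dimension that must vary
    x = list(g)
    for d in range(D):
        if random.random() < CR or d == DR:
            # sum of NV difference vectors between randomly chosen points
            delta = sum(pop[random.randrange(N)][d] - pop[random.randrange(N)][d]
                        for _ in range(NV))
            x[d] = g[d] + SF * delta
    return x
```

If all population members coincide, every difference vector is zero and the point is returned unchanged, matching the additive form of Eq. (5).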

936 | The Ant System: Optimization by a colony of cooperating agents - Dorigo, Maniezzo, et al. - 1996 |

862 | Soar: An architecture for general intelligence - Laird, Newell, et al. - 1987 |

780 | The knowledge level - Newell - 1982 |

708 | No free lunch theorems for optimization - Wolpert, Macready - 1997 |

527 | The particle swarm - explosion, stability, and convergence in a multidimensional complex space - Clerc, Kennedy - 2002 |

458 | Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces - Storn, Price - 1997 |

245 | Evolutionary Algorithms for Constrained Engineering Problems - Michalewicz, Dasgupta - 1996 |

192 | Evolution strategies: A comprehensive introduction. Natural Computing 1 - Beyer, Schwefel |

Citation Context: ... are handled by Periodic mode [37]. Each point $x \notin S$ is not adjusted into $S$; instead $F(x) = F(z)$, where $z \in S$ is the mapped point of $x$:

$$P(x_d \to z_d): \quad \begin{cases} z_d = u_d - (l_d - x_d)\,\%\, s_d & \text{IF } x_d < l_d \\ z_d = l_d + (x_d - u_d)\,\%\, s_d & \text{IF } x_d > u_d \end{cases} \tag{6}$$

where '%' is the modulus operator and $s_d = |u_d - l_d|$ is the parameter range of the $d$th dimension. The ultimate solution point $g^{*(T)} \in S$ is available by ...
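The periodic boundary mapping of Eq. (6) in the context above translates directly into Python; `periodic_map` and the per-dimension bound lists are assumed names for illustration:

```python
def periodic_map(x, lower, upper):
    """Sketch of the Periodic mode of Eq. (6): an out-of-range point x is
    not moved, but F is evaluated at the mapped point z inside S.

    x, lower, upper: per-dimension point and bound lists (assumed layout).
    """
    z = []
    for xd, ld, ud in zip(x, lower, upper):
        sd = abs(ud - ld)  # parameter range of dimension d
        if xd < ld:
            zd = ud - (ld - xd) % sd
        elif xd > ud:
            zd = ld + (xd - ud) % sd
        else:
            zd = xd  # already inside S
        z.append(zd)
    return z

# e.g. on [0, 1]: -1.5 maps to 0.5 and 2.25 maps to 0.25,
# so the landscape repeats periodically outside the box
```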

187 | Agent-based modeling: Methods and techniques for simulating human systems - Bonabeau - 2002 |

Citation Context: ... then $P(S_I \to S_O') \ge P(S_I \to S_O)$. Of course, $S_O'$ is not always equal to $S_O$. However, the searching path can be built by decreasing $\varepsilon_R$ so as to increase $(S_O' \cap S_O)/S_O'$. When $\varepsilon_R = 0$, $S_O' = S_O$. (7) The adjusting of $\varepsilon_R$ refers to a set of points in $I_G$ that are updated frequently, which is $P^{(t)} = \{p_i \mid 1 \le i \le N\}$ for both the DE and PS rules. Then in $P^{(t)}$, the number of ele...

156 | Animal intelligence: An experimental study of the associative process in animals - Thorndike - 1898 |

137 | An efficient constraint handling method for genetic algorithms - Deb - 2000 |

133 | Invariants of human behavior - Simon - 1990 |

125 | Stochastic Ranking for Constrained Evolutionary Optimization - Runarsson, Yao - 2000 |

89 | Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions: A survey of some theoretical and practical aspects of genetic algorithms - Salomon - 1996 |

Citation Context: ...nd 0.9. For {R_F}, the PBH and the BCH rules are employed. 100 runs were done for each function. The results for algorithms in SWAF were compared with those for two previously published algorithms: a) the (30, 200)-evolution strategy (ES) [29], T = 1750, then TE = 3.5E5; and b) the genetic algorithm (GA) [17], with N = 70, T = 2E4, then TE = 1.4E6. Table 1. Mean results by different algorithms for problems with inequality...

84 | ACT: A simple theory of complex cognition - Anderson - 1996 |

Citation Context: ...ol of learning. Here we use two essential categories, which include generate-and-test rules {R_GT} and problem-formulation rules {R_F}, solving the problem as follows:

$$\text{Problem} \xrightarrow{\{R_F\}} F \xrightarrow{\{R_{GT}\}} x \in S \tag{2}$$

where each R_F forms the landscape F, and each R_GT generates the points in S_O. Declarative memory (M_D) stores factual knowledge, such as knowledge points, which is divided into private and public know...

83 | The origin and evolution of cultures - Boyd, Richerson - 2005 |

Citation Context: ...29], and to follow the criteria by Deb [13], the BCH rule for goodness evaluation is realized by comparing any two points $x_A$, $x_B$:

$$F(x_A) \le F(x_B) \quad \text{IF} \quad F_{CON}(x_A) < F_{CON}(x_B) \;\text{OR}\; \big(F_{CON}(x_A) = F_{CON}(x_B) \;\text{AND}\; F_{OBJ}(x_A) \le F_{OBJ}(x_B)\big) \tag{8}$$

4.3 Adaptive Constraints Relaxing (ACR) Rule. The searching path of the BCH rule is $S_I \to S_F \to S_O$. For discussion, the probability for changing $g$ from s...
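The feasibility-based comparison of Eq. (8) in the context above (following Deb's criteria) can be sketched as a small predicate; the function name and the callable arguments `f_obj`/`f_con` are assumptions for illustration:

```python
def bch_better_or_equal(xA, xB, f_obj, f_con):
    """Sketch of the goodness comparison in Eq. (8): xA is at least as
    good as xB if its constraint violation is smaller, or the violations
    are equal and its objective value is no worse.

    f_obj, f_con: assumed callables returning the objective value and the
    total constraint violation (0 for a feasible point).
    """
    ca, cb = f_con(xA), f_con(xB)
    return ca < cb or (ca == cb and f_obj(xA) <= f_obj(xB))
```

Under this rule any feasible point beats any infeasible one, and the objective only matters between points with equal violation.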

76 | Learning at the knowledge level - Dietterich - 1986 |

75 | Simple heuristics that make us smart - Todd, Gigerenzer - 2000 |

40 | A theory of the origins of human knowledge - Anderson - 1989 |

Citation Context: ...nd performance comparison between SWAF and other existing algorithms indicates that it is efficient. 1 Introduction. The general numerical optimization problem can be defined as:

$$\text{Minimize: } F(\vec{x}) \tag{1}$$

where $\vec{x} = (x_1, \ldots, x_d, \ldots, x_D) \in S \subseteq \mathbb{R}^D$ ($1 \le d \le D$, $d \in \mathbb{N}$), and $x_d \in [l_d, u_d]$, where $l_d$ and $u_d$ are the lower and upper values respectively. $F(\vec{x})$ is the objective function. S is a D-dimensiona...

40 | Social learning in animals: Categories and mechanisms - Heyes - 1994 |

39 | DEPSO: Hybrid Particle Swarm with Differential Evolution Operator - Zhang, Xie - 2003 |

35 | Taboo Search: An Approach to the Multiple Minima Problem, Science 267 - Cvijovic, Klinowski - 1995 |

31 | Global optimization and simulated annealing - Dekkers, Aarts - 1991 |

17 | Self-adaptive fitness formulation for constrained optimization - Farmani, Wright - 2003 |

16 | Culture and the evolution of social learning - Flinn - 1997 |

13 | Why behaviour patterns that animals learn socially are locally adaptive - Galef - 1995 |

12 | Evolutionary computation: Comments on the history and current state - Back, Hammel, Schwefel - 1997 |

Citation Context: ... The generate-and-test rule (R_GT) is the combination of a generate rule (R_G) and a test rule (R_T) [15], which is a process for acquiring declarative memory:

$$\langle M_D^{(t)}, I_G \rangle \xrightarrow{R_G} x^{(t+1)} \xrightarrow{R_T} \langle M_D^{(t+1)}, I_G \rangle \tag{3}$$

Here we only discuss the {R_GT} matching the sharing information, although the {R_GT} can be extracted from some single-starting-point algorithms without I_G, such as pure random search (PRS)...

12 | Learning from mistakes - Chialvo, Bak - 1999 |

Citation Context: ... odd and even t, respectively. Another simple mode is the random combination (RC) of rules, which deploys each rule with a specified probability at random. (10) 5.2 Subsymbolic Deploying by Neural Network. To deploy rules adaptively, a neural network [4] instead of Bayesian inference [2] is applied, since not enough knowledge about the rules is available. Considering a network with N_I input, N_J middle ...

11 | Adaptive learning by extremal dynamics and negative feedback - Bak, Chialvo - 2001 |

Citation Context: ...that the algorithms in SWAF often performed better than GA and ES, especially for the combined DEPS rule. Moreover, for G2, the result of DEPS (CR=0.1) was 0.7951, which was also better than GA [4], when T was increased to 5000 (i.e. TE was increased to 3.5E5). Table 3 summarizes the mean results by GA [17], ES [29], and algorithms in SWAF for the remaining three examples with equality constraints ...

6 | Visions of rationality - Chase - 1998 |

Citation Context: ...' threshold value, and the corresponding quasi solution space is defined as $S_O'$, then an additional rule is applied on equation (8) in advance for relaxing constraints:

$$F_{CON}'(\vec{x}) = \max(\varepsilon_R, F_{CON}(\vec{x})) \tag{9}$$

It has $S_F \subseteq S_F'$ after the relaxing, and the searching path becomes $S_I \to S_F' \to S_O'$. Compared with $P(S_F \to S_O)$, $P(S_F' \to S_O')$ can be increased dramatically due to the enlarged improvement...
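The constraint-relaxing rule of Eq. (9) in the context above clamps violations up to a threshold before the comparison of Eq. (8) is applied. A minimal sketch, with assumed names (`acr_better_or_equal`, `eps_R`, and the callables `f_obj`/`f_con`) and without the adaptive schedule that shrinks the threshold toward zero:

```python
def acr_better_or_equal(xA, xB, f_obj, f_con, eps_R):
    """Eq. (9) applied before the comparison of Eq. (8): constraint
    violations are clamped to at least eps_R, so any two points inside
    the relaxed region tie on constraints and compete on the objective.

    eps_R: relaxing threshold (the adaptive update of eps_R is not shown).
    """
    ca = max(eps_R, f_con(xA))  # F'_CON(xA), Eq. (9)
    cb = max(eps_R, f_con(xB))
    return ca < cb or (ca == cb and f_obj(xA) <= f_obj(xB))
```

With `eps_R > 0`, a slightly infeasible point with a better objective can win against an exactly feasible one; with `eps_R = 0` the rule reduces to the plain comparison of Eq. (8).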

4 | Multiagent diffusion and distributed optimization - Tsui, Liu - 2003 |