Results 1 - 10 of 852
Stability analysis of swarms
- IEEE Transactions on Automatic Control, 2003
Cited by 197 (9 self)
Abstract—In this brief article we specify an “individual-based” continuous-time model for swarm aggregation in n-dimensional space and study its stability properties. We show that the individuals (autonomous agents or biological creatures) will form a cohesive swarm in finite time. Moreover, we obtain an explicit bound on the swarm size, which depends only on the parameters of the swarm model.
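Aggregation models of this kind are commonly written as a sum of pairwise attraction/repulsion terms. A minimal sketch of such dynamics, with notation assumed rather than quoted from the paper:

```latex
\dot{x}^{i} \;=\; \sum_{\substack{j=1 \\ j \neq i}}^{M} g\!\left(x^{i} - x^{j}\right),
\qquad
g(y) \;=\; -\,y\!\left[\,a - b\,\exp\!\left(-\frac{\lVert y \rVert^{2}}{c}\right)\right],
```

where each of the M individuals has position x^i in R^n, a > 0 sets a long-range linear attraction, and b > a, c > 0 set a short-range repulsion. Cohesion follows because attraction dominates at large separations, which is the mechanism behind finite-time swarm formation and an explicit bound on swarm size.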
Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients
- IEEE Transactions on Evolutionary Computation, 2004
Cited by 194 (2 self)
Abstract—This paper introduces a novel parameter automation strategy for the particle swarm algorithm and two further extensions to improve its performance after a predefined number of generations. Initially, to efficiently control the local search and convergence to the global optimum solution, time-varying acceleration coefficients (TVAC) are introduced in addition to the time-varying inertia weight factor in particle swarm optimization (PSO). Building on TVAC, two new strategies are discussed to improve the performance of the PSO. First, the concept of “mutation” is introduced into particle swarm optimization along with TVAC (MPSO-TVAC), by adding a small perturbation to a randomly selected modulus of the velocity vector of a random particle with a predefined probability. Second, we introduce a novel particle swarm concept, the “self-organizing hierarchical particle swarm optimizer with TVAC (HPSO-TVAC).” Under this method, only the “social” and “cognitive” parts of the particle swarm strategy are considered when estimating the new velocity of each particle, and particles are reinitialized whenever they stagnate in the search space. In addition, to overcome the difficulty of selecting an appropriate mutation step size for different problems, a time-varying mutation step size is introduced. Further, for most of the benchmarks, the performance of the MPSO-TVAC method is found to be insensitive to the mutation probability. The effect of the reinitialization velocity on the performance of the HPSO-TVAC method is also examined, and a time-varying reinitialization step size is found to be an efficient parameter optimization strategy for HPSO-TVAC. The HPSO-TVAC strategy outperformed all the methods considered in this investigation for most of the functions. Furthermore, both the MPSO and HPSO strategies are observed to perform poorly when the acceleration coefficients are fixed at two.
Index Terms—Acceleration coefficients, hierarchical particle swarm, mutation, particle swarm, reinitialization.
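The core idea of TVAC is to ramp the cognitive coefficient down and the social coefficient up over the run. A minimal sketch, assuming linear schedules and illustrative endpoint values (not taken from the paper's tables):

```python
import random

def tvac_coefficients(t, t_max, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """Linearly vary the acceleration coefficients over the run (TVAC).
    Endpoint values here are illustrative defaults, not the paper's."""
    frac = t / t_max
    c1 = (c1_f - c1_i) * frac + c1_i   # cognitive: large -> small
    c2 = (c2_f - c2_i) * frac + c2_i   # social:    small -> large
    return c1, c2

def tvac_velocity_update(v, x, pbest, gbest, t, t_max, w_max=0.9, w_min=0.4):
    """One velocity update combining a decreasing inertia weight with TVAC."""
    w = w_max - (w_max - w_min) * t / t_max   # time-varying inertia weight
    c1, c2 = tvac_coefficients(t, t_max)
    r1, r2 = random.random(), random.random()
    return [w * vi + c1 * r1 * (p - xi) + c2 * r2 * (g - xi)
            for vi, xi, p, g in zip(v, x, pbest, gbest)]
```

Early in the run the particle is pulled mostly toward its own best (exploration); late in the run it is pulled mostly toward the global best (exploitation).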
The fully informed particle swarm: Simpler, maybe better
- IEEE Transactions on Evolutionary Computation, 2004
Cited by 128 (5 self)
The canonical particle swarm algorithm is a new approach to optimization, drawing inspiration from group behavior and the establishment of social norms. It is gaining popularity, especially because of its speed of convergence and the fact that it is easy to use. However, we feel that each individual should not be influenced simply by the best performer among its neighbors. We thus decided to make the individuals “fully informed.” The results are very promising, as informed individuals seem to find better solutions on all the benchmark functions.
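In the fully informed scheme, every neighbor's personal best contributes to the velocity update, not only the single best neighbor. A sketch assuming the usual constriction values (chi, phi are illustrative, not quoted from the paper):

```python
import random

def fips_velocity_update(v, x, neighbor_bests, chi=0.7298, phi=4.1):
    """Fully informed particle swarm (FIPS) velocity update: average the
    randomly weighted pulls toward *every* neighbor's personal best."""
    k = len(neighbor_bests)
    new_v = []
    for d, (vd, xd) in enumerate(zip(v, x)):
        # sum of random pulls toward each neighbor's best, divided by
        # neighborhood size so total attraction stays bounded by phi
        social = sum(random.uniform(0, phi) * (p[d] - xd)
                     for p in neighbor_bests) / k
        new_v.append(chi * (vd + social))
    return new_v
```

With a single neighbor this reduces to a constricted social-only update; larger neighborhoods blend more sources of information.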
Stability analysis of social foraging swarms
- IEEE Transactions on Systems, Man, and Cybernetics, 2004
Cited by 100 (4 self)
In this article we specify an M-member “individual-based” continuous-time swarm model with individuals that move in an n-dimensional space according to an attractant/repellent or a nutrient profile. The motion of each individual is determined by three factors: i) attraction to the other individuals over long distances; ii) repulsion from the other individuals over short distances; and iii) attraction to the more favorable regions (or repulsion from the unfavorable regions) of the attractant/repellent profile. The emergent behavior of the swarm motion is the result of a balance between inter-individual interactions and the simultaneous interactions of the swarm members with their environment. We study the stability properties of the collective behavior of the swarm for different profiles and provide conditions for collective convergence to more favorable regions of the profile.
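The three factors above can be simulated with a simple Euler step: pairwise attraction/repulsion plus descent of the profile gradient. A sketch, not the paper's exact model; the function name sigma_grad and the parameter values are assumptions:

```python
import numpy as np

def swarm_step(X, sigma_grad, a=1.0, b=20.0, c=0.2, dt=0.01):
    """One Euler step of an M-member swarm in R^n.  Each row of X is one
    individual's position.  sigma_grad(x) returns the gradient of the
    attractant/repellent profile; individuals descend it while also
    attracting (long range) and repelling (short range) one another."""
    M = X.shape[0]
    V = np.zeros_like(X)
    for i in range(M):
        diff = X[i] - X                      # vectors from each other member to i
        d2 = np.sum(diff**2, axis=1)
        # long-range attraction minus short-range repulsion (j != i terms)
        w = -(a - b * np.exp(-d2 / c))
        w[i] = 0.0                           # no self-interaction
        V[i] = (w[:, None] * diff).sum(axis=0) - sigma_grad(X[i])
    return X + dt * V
```

With a quadratic profile sigma(x) = ||x - x*||^2, for example, sigma_grad(x) = 2*(x - x*) steers the cohesive swarm toward the favorable point x*.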
Particle Swarm Optimization: Basic Concepts, Variants and Applications in Power Systems
- 2008
Cited by 90 (12 self)
Many areas in power systems require solving one or more nonlinear optimization problems. While analytical methods might suffer from slow convergence and the curse of dimensionality, heuristics-based swarm intelligence can be an efficient alternative. Particle swarm optimization (PSO), part of the swarm intelligence family, is known to effectively solve large-scale nonlinear optimization problems. This paper presents a detailed overview of the basic concepts of PSO and its variants. It also provides a comprehensive survey of the power system applications that have benefited from the powerful nature of PSO as an optimization technique. For each application, the technical details required for applying PSO, such as its type, particle formulation (solution representation), and the most efficient fitness functions, are discussed.
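For reference, the canonical global-best PSO that such surveys build on can be sketched in a few lines (parameter values are common defaults, not the ones recommended in the paper; the fitness function is whatever nonlinear objective the application defines):

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 lo=-5.0, hi=5.0, seed=0):
    """Canonical global-best PSO: minimize f over [lo, hi]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                  # personal-best positions
    pf = [f(x) for x in X]                 # personal-best fitnesses
    g = min(range(n_particles), key=lambda i: pf[i])
    gbest, gf = P[g][:], pf[g]             # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])   # cognitive pull
                           + c2 * r2 * (gbest[d] - X[i][d])) # social pull
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    gbest, gf = X[i][:], fx
    return gbest, gf
```

In a power-system setting, f would encode, e.g., generation cost or losses, with the particle encoding the control variables.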
Particle swarm optimization -- An Overview
- Swarm Intelligence, 2007
Cited by 84 (0 self)
Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995. As researchers have learned about the technique, they have derived new versions, developed new applications, and published theoretical studies of the effects of the various parameters and aspects of the algorithm. This paper comprises a snapshot of particle swarming from the authors’ perspective, including variations in the algorithm, current and ongoing research, applications and open problems.
A hybrid of genetic algorithm and particle swarm optimization for recurrent network design
- IEEE Transactions on Systems, Man and Cybernetics, Part B
Cited by 79 (2 self)
Abstract—An evolutionary recurrent network which automates the design of recurrent neural/fuzzy networks using a new evolutionary learning algorithm is proposed in this paper. This new evolutionary learning algorithm is based on a hybrid of the genetic algorithm (GA) and particle swarm optimization (PSO), and is thus called HGAPSO. In HGAPSO, individuals in a new generation are created not only by the crossover and mutation operations of the GA, but also by PSO. The concept of an elite strategy is adopted in HGAPSO, where the upper half of the best-performing individuals in a population are regarded as elites. However, instead of being reproduced directly into the next generation, these elites are first enhanced. The group constituted by the elites is regarded as a swarm, and each elite corresponds to a particle within it. In this regard, the elites are enhanced by PSO, an operation which mimics the maturing phenomenon in nature. These enhanced elites constitute half of the population in the new generation, whereas the other half is generated by performing crossover and mutation operations on the enhanced elites. HGAPSO is applied to recurrent neural/fuzzy network design as follows. For recurrent neural network design, a fully connected recurrent neural network is designed and applied to a temporal sequence production problem. For recurrent fuzzy network design, a Takagi–Sugeno–Kang-type recurrent fuzzy network is designed and applied to dynamic plant control. The performance of HGAPSO is compared with both GA and PSO on these recurrent network design problems, demonstrating its superiority. Index Terms—Dynamic plant control, elite strategy, recurrent neural/fuzzy network, temporal sequence production.
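The generation scheme described above (rank, enhance the elite half by PSO, breed the other half by crossover and mutation) can be sketched schematically; this is a generic reconstruction of the hybrid, not the authors' exact operators, and the mutation/crossover choices are illustrative:

```python
import random

def hgapso_generation(pop, fitness, velocities, pbests, gbest,
                      w=0.7, c1=1.5, c2=1.5, mut_rate=0.1, rng=random):
    """One HGAPSO-style generation over real-valued individuals.
    pop: list of vectors; smaller fitness is better."""
    n = len(pop)
    order = sorted(range(n), key=lambda i: fitness[i])
    elites = [pop[i] for i in order[: n // 2]]         # upper half by fitness
    # 1) enhance elites with one PSO velocity/position update
    enhanced = []
    for i, x in zip(order[: n // 2], elites):
        v = [w * vd + c1 * rng.random() * (p - xd) + c2 * rng.random() * (g - xd)
             for vd, xd, p, g in zip(velocities[i], x, pbests[i], gbest)]
        enhanced.append([xd + vd for xd, vd in zip(x, v)])
    # 2) fill the other half by crossover + mutation on the enhanced elites
    children = []
    while len(children) < n - len(enhanced):
        a, b = rng.sample(enhanced, 2)
        cut = rng.randrange(1, len(a))
        child = a[:cut] + b[cut:]                      # one-point crossover
        child = [g + rng.gauss(0, 0.1) if rng.random() < mut_rate else g
                 for g in child]                       # Gaussian mutation
        children.append(child)
    return enhanced + children
```

The PSO step plays the role of a "maturing" operator on the elites, while GA operators keep supplying diversity in the remaining half.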
On the computation of all global minimizers through particle swarm optimization
- IEEE Transactions on Evolutionary Computation, 2004
Cited by 79 (18 self)
Abstract—This paper presents approaches for effectively computing all global minimizers of an objective function. The approaches include transformations of the objective function through the recently proposed deflection and stretching techniques, as well as a repulsion source at each detected minimizer. These techniques are incorporated into the particle swarm optimization (PSO) method, resulting in an efficient algorithm that is able to avoid previously detected solutions and thus detect all global minimizers of a function. Experimental results on benchmark problems originating from the fields of global optimization, dynamical systems, and game theory are reported, and conclusions are derived. Index Terms—Deflection technique, detecting all minimizers, dynamical systems, Nash equilibria, particle swarm optimization (PSO), periodic orbits, stretching technique.
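The deflection idea can be sketched in a few lines: once a minimizer is found, the objective is divided by a factor that vanishes at that point, turning it into a pole the swarm avoids. A sketch of the named technique (the parameter lam and the Euclidean distance are illustrative choices; the technique assumes f is positive, shifting it if necessary):

```python
import math

def deflect(f, minimizers, lam=1.0):
    """Deflection transform: for each detected minimizer m, divide f by
    tanh(lam * ||x - m||).  Near m the divisor approaches 0, so the
    transformed objective blows up and repels the swarm from m."""
    def F(x):
        val = f(x)
        for m in minimizers:
            dist = math.sqrt(sum((xi - mi) ** 2 for xi, mi in zip(x, m)))
            val /= math.tanh(lam * dist)   # divisor in (0, 1): inflates val
        return val
    return F
```

Running PSO repeatedly on F, adding each new minimizer to the list, is the basic loop for enumerating all global minimizers.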