Results 1–10 of 23
Biogeography-based optimization
 IEEE Transactions on Evolutionary Computation
, 2008
Abstract

Cited by 46 (15 self)
Abstract—We propose a novel variation to biography-based optimization (BBO), which is an evolutionary algorithm (EA) developed for global optimization. The new algorithm employs opposition-based learning (OBL) alongside BBO’s migration rates to create oppositional BBO (OBBO). Additionally, a new opposition method named quasi-reflection is introduced. Quasi-reflection is based on opposite numbers theory, and we mathematically prove that it has the highest expected probability of being closer to the problem solution among all OBL methods. The oppositional algorithm is further revised by the addition of dynamic domain scaling and weighted reflection. Simulations have been performed to validate the performance of quasi-opposition, as well as a mathematical analysis for a single-dimensional problem. Empirical results demonstrate that with the assistance of quasi-reflection, OBBO significantly outperforms BBO in terms of success rate and the number of fitness function evaluations required to find an optimal solution. Index Terms—Biogeography-based optimization (BBO), evolutionary algorithms, opposition-based learning, opposite numbers, quasi-opposite numbers, quasi-reflected numbers, probability.
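The quasi-reflection idea in this abstract can be illustrated with a minimal sketch, based only on the description above: a quasi-reflected point is drawn uniformly between the interval centre and the candidate, while a quasi-opposite point is drawn uniformly between the centre and the candidate's opposite. Function names are hypothetical, not the authors' implementation.

```python
import random

def quasi_reflected(x, lo, hi, rng=random):
    """Sample a quasi-reflected point: uniform between the interval
    centre c = (lo + hi) / 2 and the candidate x itself."""
    c = (lo + hi) / 2.0
    a, b = (c, x) if x >= c else (x, c)
    return rng.uniform(a, b)

def quasi_opposite(x, lo, hi, rng=random):
    """Sample a quasi-opposite point: uniform between the centre c
    and the opposite point x_hat = lo + hi - x."""
    c = (lo + hi) / 2.0
    x_hat = lo + hi - x
    a, b = (c, x_hat) if x_hat >= c else (x_hat, c)
    return rng.uniform(a, b)

random.seed(0)
qr = quasi_reflected(2.0, 0.0, 10.0)   # falls in [2.0, 5.0]
qo = quasi_opposite(2.0, 0.0, 10.0)    # falls in [5.0, 8.0]
```

In an oppositional EA, such points would be evaluated alongside the original population and the fitter of each pair retained.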
On-Line Learning Processes in Artificial Neural Networks
, 1993
Abstract

Cited by 31 (4 self)
We study on-line learning processes in artificial neural networks from a general point of view. On-line learning means that a learning step takes place at each presentation of a randomly drawn training pattern. It can be viewed as a stochastic process governed by a continuous-time master equation. On-line learning is necessary if not all training patterns are available all the time. This occurs in many applications when the training patterns are drawn from a time-dependent environmental distribution. Studying learning in a changing environment, we encounter a conflict between the adaptability and the confidence of the network's representation. Minimization of a criterion incorporating both effects yields an algorithm for on-line adaptation of the learning parameter. The inherent noise of on-line learning makes it possible to escape from undesired local minima of the error potential on which the learning rule performs (stochastic) gradient descent. We try to quantify these often-made cl...
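The per-pattern update scheme the abstract describes can be sketched as follows; this is a minimal illustration (a single linear unit trained by stochastic gradient descent on randomly drawn patterns), with names and constants chosen for the example rather than taken from the paper.

```python
import random

def online_train(patterns, eta=0.1, steps=500, seed=0):
    """On-line learning: one stochastic gradient step per randomly
    drawn training pattern (single linear unit, squared error)."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, t = rng.choice(patterns)   # draw a random pattern
        err = (w * x + b) - t         # prediction error
        w -= eta * err * x            # stochastic gradient step
        b -= eta * err
    return w, b

# Patterns drawn from t = 2x + 1 (noise-free for illustration).
data = [(x, 2.0 * x + 1.0) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]
w, b = online_train(data)
```

A fixed learning parameter eta is used here; the paper's contribution is precisely an on-line rule for adapting it in a changing environment.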
Simulated Annealing Algorithms For Continuous Global Optimization
, 2000
Abstract

Cited by 30 (1 self)
INTRODUCTION In this paper we consider Simulated Annealing algorithms (SA in what follows) applied to continuous global optimization problems, i.e. problems of the form f* = min_{x ∈ X} f(x), (1.1) where X ⊆ R^n is a continuous domain, often assumed to be compact, which, combined with the continuity or lower semicontinuity of f, guarantees the existence of the minimum value f*. SA algorithms are based on an analogy with a physical phenomenon: while at high temperatures the molecules in a liquid move freely, if the temperature is slowly decreased the thermal mobility of the molecules is lost and they form a pure crystal, which also corresponds to a state of minimum energy. If the temperature is decreased too quickly (the so-called quenching), a liquid metal instead ends up in a polycrystalline or amorphous state with
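The annealing analogy translates into a simple algorithm: accept worse moves with probability exp(-delta/T) and cool T slowly. A minimal sketch for a one-dimensional problem, with schedule and neighbourhood parameters chosen for illustration:

```python
import math
import random

def simulated_annealing(f, lo, hi, t0=1.0, cooling=0.95,
                        steps_per_temp=50, t_min=1e-4, seed=0):
    """Minimise f on [lo, hi] with a geometric cooling schedule.
    Worse moves are accepted with probability exp(-delta / T)."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best = x
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            # Propose a neighbour, clipped to the feasible domain X.
            y = min(hi, max(lo, x + rng.gauss(0.0, 0.5)))
            delta = f(y) - f(x)
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                x = y
            if f(x) < f(best):
                best = x
        t *= cooling   # slow cooling; fast cooling ("quenching") gets stuck
    return best

# Multimodal test function with global minimum at x = 0.
g = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
x_star = simulated_annealing(g, -5.0, 5.0)
```

The geometric schedule here is only one choice; the convergence theory surveyed in such papers typically concerns much slower (e.g. logarithmic) schedules.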
A Theory for Learning by Weight Flow on Stiefel-Grassman Manifold
 Neural Computation
, 2001
Abstract

Cited by 26 (13 self)
Recently we introduced the concept of neural network learning on the Stiefel-Grassman manifold for MLP-like networks. Contributions by other authors on this topic have also appeared in the scientific literature. The aim of this paper is to present a general theory for it, and to illustrate how existing theories may be explained within the general framework proposed here. 1 Introduction In a multilayer-perceptron-like network formed by the interconnection of basic neurons, whose only adjustable parts are weight-vectors, learning the optimal set of connection patterns may be interpreted as selecting the best directions among all possible ones in the space that the weight-vectors belong to (Fyfe, 1995). This interpretation is very useful in that, if a learning error criterion is defined over the weight-space, it measures how interesting the directions are, so that ultimately the rule by which the network learns may be conceived as a search procedure allowing one to find out...
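Learning constrained to orthonormal weight frames can be sketched in a few lines: take an unconstrained update step, then map the weights back onto the manifold of orthonormal frames. This sketch uses Gram-Schmidt as the retraction, which is one simple choice, not necessarily the flow studied in the paper; the matrices are invented for illustration.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(rows):
    """Re-orthonormalise row vectors: a simple retraction onto the
    Stiefel manifold of orthonormal frames."""
    out = []
    for r in rows:
        v = list(r)
        for q in out:
            c = dot(v, q)
            v = [a - c * b for a, b in zip(v, q)]
        n = math.sqrt(dot(v, v))
        out.append([a / n for a in v])
    return out

# One illustrative learning step: move the weight rows along a
# (made-up) gradient, then retract back onto the manifold.
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
grad = [[0.1, 0.2, -0.1], [0.0, -0.3, 0.2]]
W = [[w - 0.5 * g for w, g in zip(wr, gr)] for wr, gr in zip(W, grad)]
W = gram_schmidt(W)   # rows are orthonormal again
```

A true weight-flow formulation would instead integrate a differential equation whose trajectory stays on the manifold; the retraction above is the discrete-time shortcut.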
Massively Parallel Simulated Annealing and its Relation to Evolutionary Algorithms
 EVOLUTIONARY COMPUTATION
, 1994
Abstract

Cited by 22 (2 self)
Simulated annealing and single-trial versions of evolution strategies possess a close relationship when they are designed for optimization over continuous variables. Analytical investigations of their differences and similarities lead to a cross-fertilization of both approaches, resulting in new theoretical results, new parallel population-based algorithms, and a better understanding of the interrelationships.
Trace-Based Methods for Solving Nonlinear Global Optimization and Satisfiability Problems
 J. of Global Optimization
, 1996
Abstract

Cited by 15 (5 self)
In this paper we present a method called NOVEL (Nonlinear Optimization via External Lead) for solving continuous and discrete global optimization problems. NOVEL addresses the balance between global search and local search, using a trace to aid in identifying promising regions before committing to local searches. We discuss NOVEL for solving continuous constrained optimization problems and show how it can be extended to solve constraint satisfaction and discrete satisfiability problems. We first transform the problem using Lagrange multipliers into an unconstrained version. Since a stable solution in a Lagrangian formulation only guarantees a local optimum satisfying the constraints, we propose a global search phase in which an aperiodic and bounded trace function is added to the search to first identify promising regions for local search. The trace generates an information-bearing trajectory from which good starting points are identified for further local searches. Taking only a sm...
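The first stage described above, folding the constraints into the objective with Lagrange multipliers and seeking a saddle point, can be sketched as follows. This is a generic descent-in-x / ascent-in-lambda iteration on a toy problem (minimise x^2 subject to x - 1 = 0), not NOVEL's trace mechanism; all names and constants are illustrative.

```python
def lagrangian_saddle(steps=2000, eta=0.05):
    """Seek a saddle point of L(x, lam) = f(x) + lam * h(x)
    with f(x) = x^2 and equality constraint h(x) = x - 1 = 0:
    gradient descent in x, gradient ascent in the multiplier."""
    x, lam = 0.0, 0.0
    for _ in range(steps):
        x -= eta * (2.0 * x + lam)   # descend: dL/dx = 2x + lam
        lam += eta * (x - 1.0)       # ascend:  dL/dlam = h(x)
    return x, lam

x, lam = lagrangian_saddle()
# Converges to the constrained optimum x = 1 with multiplier lam = -2.
```

Such a stable point is only a local constrained optimum, which is exactly why the paper adds a global trace-guided phase on top.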
A Theory for Learning Based on Rigid Bodies Dynamics
, 2002
Abstract

Cited by 12 (11 self)
A new learning theory, derived from the study of the dynamics of an abstract system of masses moving in a multidimensional space under an external force field, is presented. The set of equations describing the system's dynamics may be directly interpreted as a learning algorithm for neural layers. Relevant properties of the proposed learning theory are discussed in the paper, along with results of computer simulations performed to assess its effectiveness in applied fields.
A probabilistic analysis of a simplified biogeography-based optimization algorithm, Evolutionary Computation, in print, available at http://embeddedlab.csuohio.edu/BBO
Abstract

Cited by 8 (5 self)
Biogeography-based optimization (BBO) is a population-based evolutionary algorithm (EA) that is based on the mathematics of biogeography. Biogeography is the study of the geographical distribution of biological organisms. We present a simplified version of BBO and perform an approximate analysis of the BBO population using probability theory. Our analysis provides approximate values for the expected number of generations before the population's best solution improves, and the expected amount of improvement. These expected values are functions of the population size. We quantify three behaviors as the population size increases: first, the best solution in the initial randomly generated population improves; second, the expected number of generations before improvement increases; and third, the expected amount of improvement decreases.
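For readers unfamiliar with BBO, the migration mechanism can be sketched as follows: low-ranked (poor) solutions immigrate features with high probability, and the features they receive are emigrated from solutions chosen with probability increasing in rank. This is a generic simplified sketch with invented rates and parameters, not the specific model analysed in the paper.

```python
import random

def simple_bbo(f, dim, pop_size=20, gens=100, lo=-5.0, hi=5.0, seed=1):
    """Simplified biogeography-based optimisation (minimisation)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                 # best (lowest cost) first
        n = pop_size
        for i in range(n):
            lam = (i + 1) / n           # immigration rate: worse rank -> higher
            for d in range(dim):
                if rng.random() < lam:
                    # Emigration: roulette wheel biased toward better ranks.
                    j = rng.randrange(n)
                    while rng.random() > (n - j) / n:
                        j = rng.randrange(n)
                    pop[i][d] = pop[j][d]
                if rng.random() < 0.01:  # light mutation
                    pop[i][d] = rng.uniform(lo, hi)
    return min(pop, key=f)

# Minimise the sphere function on [-5, 5]^3.
best = simple_bbo(lambda x: sum(v * v for v in x), dim=3)
cost = sum(v * v for v in best)
```

The paper's analysis asks, for models of this kind, how long the population's best solution is expected to stall and by how much it improves, as functions of pop_size.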
Unsupervised Neural Learning On Lie Group
, 2002
Abstract

Cited by 7 (2 self)
This paper presents general results about a new class of learning rules for linear as well as nonlinear neural layers, which allow the weight-matrix, describing the connection strengths between the inputs and the neurons, to learn in unsupervised frameworks under orthonormality constraints, namely, when the network parameters can be arranged in vectors of constant length that are orthogonal to each other. This paper follows our preceding work, devoted to a first analysis of learning rules on the Stiefel-Grassman manifold and to a wide bibliographical investigation showing the close relationships among existing contributions, and Ref. 23, devoted to a wide numerical comparison of orthonormal neural signal processing techniques in the principal/independent component analysis field. The present paper answers the need for a more general treatment of learning theories with orthonormal constraints and for a more detailed investigation of specific examples, from which useful hints on the general applicability of the proposed theory emerge
Optimal Anytime Search For Constrained Nonlinear Programming
, 2001
Abstract

Cited by 6 (2 self)
In this thesis, we study optimal anytime stochastic search algorithms (SSAs) for solving general constrained nonlinear programming problems (NLPs) in discrete, continuous and mixed-integer spaces. The algorithms are general in the sense that they do not assume differentiability or convexity of functions. Based on the search algorithms, we develop the theory of SSAs and propose optimal SSAs with iterative deepening in order to minimize their expected search time. Based on the optimal SSAs, we then develop optimal anytime SSAs that generate improved solutions as more search time is allowed. Our SSAs
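The two ingredients named in this abstract, iterative deepening and the anytime property, can be sketched generically: run a bounded stochastic search with a doubling budget and keep the incumbent best, so a usable answer exists whenever the search is stopped. This is a minimal illustration (pure random sampling as the inner search), not the thesis's optimal SSA; all parameters are invented.

```python
import random

def random_search(f, lo, hi, budget, rng):
    """One bounded stochastic search pass: pure random sampling."""
    best = rng.uniform(lo, hi)
    for _ in range(budget - 1):
        x = rng.uniform(lo, hi)
        if f(x) < f(best):
            best = x
    return best

def anytime_search(f, lo, hi, rounds=8, seed=0):
    """Iterative deepening: double the budget each round and keep the
    incumbent, so stopping early still yields the best found so far."""
    rng = random.Random(seed)
    incumbent, history = None, []
    budget = 16
    for _ in range(rounds):
        x = random_search(f, lo, hi, budget, rng)
        if incumbent is None or f(x) < f(incumbent):
            incumbent = x
        history.append(f(incumbent))   # incumbent cost never increases
        budget *= 2
    return incumbent, history

f = lambda x: (x - 3.0) ** 2
x_best, hist = anytime_search(f, -10.0, 10.0)
```

The thesis's contribution is choosing the inner search and the deepening schedule so that the expected time to reach a target solution quality is minimized.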