Results 1–10 of 10
Evolutionary Optimization of Neural Networks for Face Detection
2004
Cited by 8 (2 self)
For face recognition from video streams, speed and accuracy are vital aspects. The first decision whether a preprocessed image region represents a human face or not is often made by a neural network, e.g., in the Viisage-FaceFINDER video surveillance system. We describe the optimization of such a network by a hybrid algorithm combining evolutionary computation and gradient-based learning. The evolved solutions perform considerably faster than an expert-designed architecture without loss of accuracy.
Neutrality and self-adaptation
Natural Computing, 2003
Cited by 7 (3 self)
Neutral genotype-phenotype mappings can be observed in natural evolution and are often used in evolutionary computation. In this article, important aspects of such encodings are analyzed. First, it is shown that in the absence of external control, neutrality allows a variation of the search distribution independent of phenotypic changes. In particular, neutrality is necessary for self-adaptation, which is used in a variety of algorithms from all main paradigms of evolutionary computation to increase efficiency. Second, the average number of fitness evaluations needed to find a desirable (e.g., optimally adapted) genotype is derived, depending on the number of desirable genotypes and the cardinality of the genotype space. It turns out that this number increases only marginally when neutrality is added to an encoding, presuming that the fraction of desirable genotypes stays constant and that the number of these genotypes is not too small.
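The scaling claim can be illustrated with a toy random-search simulation (this is only an illustration, not the paper's derivation; the space sizes, the 5% fraction, and the with-replacement sampling model are assumptions chosen for the example): holding the fraction of desirable genotypes constant, enlarging the genotype space tenfold leaves the expected number of evaluations essentially unchanged.

```python
import random

def evals_until_hit(space_size, n_desirable, trials=20000, seed=1):
    """Monte Carlo estimate of the mean number of uniform random samples
    (with replacement) until a desirable genotype is drawn."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        evals = 0
        while True:
            evals += 1
            if rng.randrange(space_size) < n_desirable:
                break
        total += evals
    return total / trials

# Base encoding: 1,000 genotypes, 50 of them desirable (fraction 5%).
base = evals_until_hit(1000, 50)
# Neutral encoding: space enlarged tenfold, same 5% fraction of desirable ones.
neutral = evals_until_hit(10000, 500)
# Both means land near 1 / 0.05 = 20 evaluations.
```

Under this simple model the expected cost depends only on the fraction of desirable genotypes, which is the intuition behind the paper's more careful result for small numbers of desirable genotypes.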
Evolutionary multi-objective optimization of neural networks for face detection
International Journal of Computational Intelligence and Applications, 2004
Cited by 7 (4 self)
For face recognition from video streams, speed and accuracy are vital aspects. The first decision whether a preprocessed image region represents a human face or not is often made by a feedforward neural network (NN), e.g., in the Viisage-FaceFINDER video surveillance system. We describe the optimization of such an NN by a hybrid algorithm combining evolutionary multi-objective optimization (EMO) and gradient-based learning. The evolved solutions perform considerably faster than an expert-designed architecture without loss of accuracy. We compare an EMO and a single-objective approach, both with online search strategy adaptation. It turns out that EMO is preferable to the single-objective approach in several respects.
Voronoi-based estimation of distribution algorithm for multi-objective optimization
In Proceedings of the Congress on Evolutionary Computation, 2004
Cited by 6 (5 self)
The distribution of the Pareto-optimal solutions often has a clear structure. To adapt evolutionary algorithms to the structure of a multi-objective optimization problem, either an adaptive representation or adaptive genetic operators should be employed. In this paper, we suggest an estimation of distribution algorithm for solving multi-objective optimization problems, which is able to adjust its reproduction process to the problem structure. For this purpose, a new algorithm called Voronoi-based Estimation of Distribution Algorithm (VEDA) is proposed. In VEDA, a Voronoi diagram is used to construct stochastic models, based on which new offspring will be generated. Empirical comparisons of VEDA with other estimation of distribution algorithms (EDAs) and the popular NSGA-II algorithm are carried out. In addition, representation of Pareto-optimal solutions using a mathematical model rather than a solution set is also discussed.
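The core idea of sampling offspring in the neighborhood structure induced by a Voronoi diagram can be sketched as rejection sampling in one cell (a minimal illustration only, not VEDA's actual model-building procedure; the representatives, the Gaussian perturbation, and `sigma` are assumptions made for the example):

```python
import math
import random

def nearest(point, reps):
    """Index of the representative whose Voronoi cell contains the point."""
    return min(range(len(reps)), key=lambda i: math.dist(point, reps[i]))

def voronoi_offspring(reps, idx, sigma=0.1, rng=None, max_tries=100):
    """Sample a new point from (approximately) the Voronoi cell of reps[idx]:
    perturb the representative with Gaussian noise and accept the candidate
    only if reps[idx] is still its nearest representative."""
    rng = rng or random.Random()
    for _ in range(max_tries):
        cand = [x + rng.gauss(0.0, sigma) for x in reps[idx]]
        if nearest(cand, reps) == idx:
            return cand
    return list(reps[idx])  # fallback: the representative itself is in its cell

# Three representatives of good solutions in 2-D decision space.
reps = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
child = voronoi_offspring(reps, 0, rng=random.Random(42))
```

The rejection test is what ties the offspring distribution to the geometry of the current solution set rather than to a fixed global operator.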
Operator Adaptation in Structure Optimization of Neural Networks
2001
Cited by 4 (3 self)
The adaptation of operator probabilities is based on a function q that measures the value of a single modification by an operator. One possible choice for this measure is the local delta or credit given by q(g) := max{Φ(g) − Φ(g_best), 0}, where g_best is the best individual in the current population and Φ denotes the fitness. Replacing Φ(g_best) with the fitness of the parent of g yields an alternative measure called benefit. The generation-dependent quality of an operator o is defined as q_o^(t) := (1/|O_o^(t)|) · Σ_{g ∈ O_o^(t)} q(g), where O_o^(t) is the set of individuals generated by operator o in generation t.
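Under these definitions, the per-operator quality computation can be sketched as follows (a minimal sketch; the fitness values and the maximization convention are illustrative assumptions, not taken from the paper):

```python
def credit(fitness_g, fitness_best):
    """Local delta ('credit'): improvement of g over the current best
    individual, clipped at zero (maximization assumed)."""
    return max(fitness_g - fitness_best, 0.0)

def operator_quality(offspring_fitnesses, fitness_best):
    """Generation-dependent quality of an operator: mean credit over the
    offspring that operator produced in the current generation."""
    if not offspring_fitnesses:
        return 0.0
    return sum(credit(f, fitness_best) for f in offspring_fitnesses) / len(offspring_fitnesses)

# Toy generation: best fitness so far is 10.0; operator A produced offspring
# with fitnesses [12.0, 9.0], operator B produced a single offspring of 10.5.
q_a = operator_quality([12.0, 9.0], 10.0)  # (2.0 + 0.0) / 2 = 1.0
q_b = operator_quality([10.5], 10.0)       # 0.5
```

Only improvements over the reference fitness count; neutral or harmful offspring contribute zero, so an operator's quality reflects how often and how strongly it helps.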
Synergies between evolutionary and neural computation
In 13th European Symposium on Artificial Neural Networks (ESANN 2005), 2005
Cited by 3 (0 self)
Evolutionary and neural computation share the same philosophy of using biological information processing for the solution of technical problems. Besides this important but rather abstract common foundation, there have also been many successful combinations of both methods for solving applied problems such as the design of turbomachinery components. In this paper, we introduce evolutionary algorithms primarily for a "neural" audience and demonstrate their usefulness for neural computation. Furthermore, we introduce a list of some more recent trends in combining evolutionary and neural computation, which show that synergies between the two fields go beyond the typically quoted example of topology optimisation of neural networks. We strive to increase the awareness of these trends in the neural computation community and to spark some interest in one or the other of the shown directions.
Evolving From Genetic Algorithms to Flexible Evolution Agents
2002
Cited by 1 (1 self)
In this paper, an alternative design for the internal structure of Evolutionary Algorithms is presented, and a first implementation of this structure is tested. In order to improve the internal architecture of Evolutionary Algorithms, as well as to learn what the adequate values of parameters and/or structures of operators are, a review of the successive efforts made along this line is included. In this way, the evolution from the simplest GA to the present advanced structures is illustrated using figures that summarize their characteristics.
Genesis of Organic Computing Systems: Coupling Evolution and Learning
Organic computing calls for efficient adaptive systems in which flexibility is not traded in against stability and robustness. Such systems have to be specialized in the sense that they are biased towards solving instances from certain problem classes, namely those problems they may face in their environment. Nervous systems are perfect examples; their specialization stems from evolution and development. In organic computing, simulated evolutionary structure optimization can create artificial neural networks for particular environments. In this chapter, trends and recent results in combining evolutionary and neural computation are reviewed. The emphasis is put on the influence of evolution and development on the structure of neural systems. It is demonstrated how neural structures can be evolved that efficiently learn solutions for problems from a particular problem class. Simple examples of systems that "learn to learn" as well as technical solutions for the design of turbomachinery components are presented.
Algorithms, Performance
Learning the optimal probabilities of applying an exploration operator from a set of alternatives can be done by self-adaptation or by adaptive allocation rules. In this paper we consider the latter option. The allocation strategies discussed in the literature basically belong to the class of probability matching algorithms. These strategies adapt the operator probabilities in such a way that they match the reward distribution. In this paper we introduce an alternative adaptive allocation strategy, called the adaptive pursuit method. We compare this method with the probability matching approach in a non-stationary environment. Calculations and experimental results show the superior performance of the adaptive pursuit algorithm. If the reward distributions stay stationary for some time, the adaptive pursuit method converges rapidly and accurately to an operator probability distribution that results in a much higher probability of selecting the current optimal operator and a much higher average reward than with the probability matching strategy. Most importantly, the adaptive pursuit scheme remains sensitive to changes in the reward distributions and reacts swiftly to non-stationary shifts in the environment.
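A single adaptive-pursuit update can be sketched as below (a sketch of the standard pursuit scheme, not the paper's exact code; the learning rates `alpha` and `beta`, the floor `p_min`, and the toy reward sequence are illustrative assumptions):

```python
def pursuit_update(P, Q, chosen, reward, alpha=0.8, beta=0.8, p_min=0.1):
    """One step of an adaptive pursuit update.
    P: operator selection probabilities, Q: running reward estimates."""
    K = len(P)
    p_max = 1.0 - (K - 1) * p_min
    # Update the reward estimate of the operator that was just applied.
    Q[chosen] += alpha * (reward - Q[chosen])
    # 'Pursue' the currently best operator: push its probability toward
    # p_max and every other operator's probability toward p_min.
    best = max(range(K), key=lambda i: Q[i])
    for i in range(K):
        target = p_max if i == best else p_min
        P[i] += beta * (target - P[i])
    return P, Q

P = [1 / 3, 1 / 3, 1 / 3]
Q = [0.0, 0.0, 0.0]
# Operator 1 keeps yielding reward 1.0; its selection probability rises fast.
for _ in range(5):
    P, Q = pursuit_update(P, Q, chosen=1, reward=1.0)
```

Unlike probability matching, the winner's probability converges to p_max rather than to its share of the reward, while the floor p_min keeps every operator occasionally selected so the scheme can react when the reward distribution shifts.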
Use of Gene-Dependent Mutation Probability in Evolutionary Neural Networks for Non-Stationary Problems
In this article, the authors investigate the application of Genetic Algorithms (GAs) with gene-dependent mutation probability to the training of Artificial Neural Networks (ANNs) in non-stationary problems. In the problems studied, the function mapped by an ANN changes during the search carried out by the GA. In the GA proposed, each gene is associated with an independent mutation probability, and the knowledge obtained during the evolution is used to update these probabilities: if the modification of a set of genes is useful when the problem changes its profile, the mutation probabilities of these genes are increased. As a result, the search is concentrated in regions associated with genes presenting higher mutation probabilities.
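The update idea can be sketched as follows (a hypothetical minimal version; the bit-string encoding, the multiplicative `boost` factor, and the `p_max` cap are assumptions for illustration, not the authors' exact rule):

```python
import random

def mutate(genome, p_mut, rng):
    """Flip each bit independently with its gene-specific mutation probability."""
    return [1 - g if rng.random() < p_mut[i] else g
            for i, g in enumerate(genome)]

def reinforce(p_mut, useful_genes, boost=1.5, p_max=0.5):
    """If modifying a set of genes proved useful after the problem changed its
    profile, raise those genes' mutation probabilities (capped at p_max)."""
    for i in useful_genes:
        p_mut[i] = min(p_mut[i] * boost, p_max)
    return p_mut

rng = random.Random(0)
p_mut = [0.01] * 8  # start with a uniform, low per-gene mutation rate
# Suppose genes 2 and 5 were the ones whose modification helped after a shift.
p_mut = reinforce(p_mut, [2, 5])
child = mutate([0] * 8, p_mut, rng)
```

Because only the reinforced genes carry elevated rates, later mutations concentrate on the parts of the genome that historically needed to change when the environment shifted.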