Results 1–10 of 112
Diversity in Genetic Programming: An Analysis of Measures and Correlation with Fitness
2004
Cited by 61 (5 self)
This paper examines measures of diversity in genetic programming. The goal is to understand the importance of such measures and their relationship with fitness. Diversity methods and measures from the literature are surveyed and a selected set of measures are applied to common standard problem instances in an experimental study. Results show the varying definitions and behaviours of diversity and the varying correlation between diversity and fitness during different stages of the evolutionary process. Populations in the genetic programming algorithm are shown to become structurally similar while maintaining a high amount of behavioural differences. Conclusions describe what measures are likely to be important for understanding and improving the search process and why diversity might have different meaning for different problem domains.
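The structural-versus-behavioural distinction the abstract draws can be made concrete with two toy measures. These are hypothetical helpers for illustration, not the paper's own definitions: structural diversity as the fraction of unique program shapes, behavioural diversity as the entropy of the fitness-value distribution.

```python
from collections import Counter
import math

def structural_diversity(trees):
    """Fraction of structurally unique individuals; trees are given as
    canonical string representations for illustration."""
    return len(set(trees)) / len(trees)

def behavioural_diversity(fitnesses):
    """Shannon entropy (in nats) of the fitness-value distribution, one
    common proxy for behavioural (phenotypic) diversity."""
    n = len(fitnesses)
    return -sum((c / n) * math.log(c / n)
                for c in Counter(fitnesses).values())

# A late-run GP population as described above: structurally similar,
# behaviourally varied (values are hypothetical).
structures = ["(+ x 1)"] * 8 + ["(* x x)"] * 2
fitnesses = [0.1, 0.2, 0.3, 0.4, 0.1, 0.5, 0.6, 0.2, 0.9, 0.8]
```

On such a population the structural measure is low while the behavioural one stays high, which is the pattern the paper reports.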
A Constructive Algorithm for Training Cooperative Neural Network Ensembles
IEEE Transactions on Neural Networks, 2003
Cited by 52 (20 self)
This paper presents a constructive algorithm for training cooperative neural network ensembles (CNNEs). CNNE combines ensemble architecture design with cooperative training of the individual neural networks (NNs) in an ensemble. Unlike most previous studies on training ensembles, CNNE puts emphasis on both accuracy and diversity among the individual NNs. In order to maintain accuracy, the number of hidden nodes in each individual NN is also determined by a constructive approach. Incremental training based on negative correlation is used in CNNE to train individual NNs for different numbers of training epochs. The use of negative correlation learning and of different training epochs reflects CNNE's emphasis on diversity among the individual NNs in an ensemble. CNNE has been tested extensively on a number of benchmark problems in machine learning and neural networks, including the Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, soybean, and Mackey-Glass time series prediction problems. The experimental results show that CNNE can produce NN ensembles with good generalization ability.
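The negative-correlation ingredient of CNNE's incremental training can be sketched for one training pattern as below. The function name, signature, and `lam` (the correlation-strength hyperparameter) are this sketch's assumptions; it shows only the per-network penalised error, not CNNE's constructive loop.

```python
def ncl_errors(outputs, target, lam=0.5):
    """Per-network error with a negative-correlation penalty. For
    network i the penalty is (f_i - f_bar) * sum_{j != i} (f_j - f_bar),
    which rewards predictions that deviate from the ensemble mean in
    opposite directions, encouraging diversity among members."""
    n = len(outputs)
    f_bar = sum(outputs) / n
    errs = []
    for i in range(n):
        corr = (outputs[i] - f_bar) * sum(
            outputs[j] - f_bar for j in range(n) if j != i)
        errs.append(0.5 * (outputs[i] - target) ** 2 + lam * corr)
    return errs
```

Note that two networks straddling the target symmetrically incur no penalised error at `lam=0.5`, even though each is individually wrong; that is the diversity pressure at work.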
Muñoz-Pérez, Multiobjective cooperative coevolution of artificial neural networks (multiobjective cooperative networks), Neural Networks 15
2002
Cited by 46 (4 self)
Abstract—This paper presents a cooperative coevolutionary approach for designing neural network ensembles. Cooperative coevolution is a recent paradigm in evolutionary computation that allows the effective modeling of cooperative environments. Although, in theory, a single neural network with a sufficient number of neurons in the hidden layer suffices to solve any problem, in practice many real-world problems are too hard for the appropriate network to be constructed. For such problems, neural network ensembles are a successful alternative. Nevertheless, the design of neural network ensembles is a complex task. In this paper, we propose a general framework for designing neural network ensembles by means of cooperative coevolution. The proposed model has two main objectives: first, improving the combination of the trained individual networks; second, evolving those networks cooperatively, encouraging collaboration among them instead of training each network separately. In order to favor cooperation, each network is evaluated throughout the evolutionary process using a multiobjective method. For each network, different objectives are defined, considering not only its performance on the given problem but also its cooperation with the rest of the networks. In addition, a population of ensembles is evolved, improving the combination of networks and obtaining subsets of networks that form ensembles performing better than the combination of all the evolved networks. The proposed model is applied to ten real-world classification problems of very different natures from the UCI machine learning repository and the Proben1 benchmark set. In all of them, the model performs better than standard ensembles in terms of generalization error. Moreover, the obtained ensembles are also smaller. Index Terms—Classification, cooperative coevolution, multiobjective optimization, neural network ensembles.
Reducing fitness evaluations using clustering techniques and neural network ensembles
In Genetic and Evolutionary Computation Conference (GECCO 2004), 2004
Cited by 34 (9 self)
Abstract. In many real-world applications of evolutionary computation, it is essential to reduce the number of fitness evaluations. To this end, computationally efficient models can be constructed for fitness evaluation to assist the evolutionary algorithm. When approximate models are involved in evolution, it is very important to determine which individuals should be re-evaluated using the original fitness function to guarantee fast and correct convergence of the evolutionary algorithm. In this paper, the k-nearest-neighbor method is applied to group the individuals of a population into a number of clusters. For each cluster, only the individual closest to the cluster center is evaluated using the expensive original fitness function. The fitness of the other individuals is estimated using a neural network ensemble, which is also used to detect possible serious prediction errors. Simulation results on three test functions show that the proposed method outperforms the strategy where only the best individuals according to the approximate model are re-evaluated.
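The evaluate-only-the-cluster-representative idea can be sketched on a 1-D population as below. The function name, the naive k-means stand-in for the grouping step, and the caller-supplied `true_fitness`/`surrogate` callables are all this sketch's assumptions, not the paper's exact procedure.

```python
def cluster_and_estimate(pop, k, true_fitness, surrogate, iters=10):
    """Group a 1-D population into k clusters, evaluate only the
    individual closest to each cluster centre with the expensive true
    fitness, and estimate the rest with a cheap surrogate model."""
    centers = list(pop[:k])  # deterministic initialisation for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in pop:
            nearest = min(range(k), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    fitness = {}
    for i, c in enumerate(clusters):
        if not c:
            continue
        rep = min(c, key=lambda x: abs(x - centers[i]))
        fitness[rep] = true_fitness(rep)      # expensive, exact
        for x in c:
            if x != rep:
                fitness[x] = surrogate(x)     # cheap estimate
    return fitness

pop = [0.0, 0.2, 1.0, 5.0, 5.2, 6.0]
fit = cluster_and_estimate(pop, 2,
                           true_fitness=lambda x: x ** 2,
                           surrogate=lambda x: x ** 2 + 0.01)
```

With k clusters, each generation pays for only k expensive evaluations instead of one per individual.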
Neural network regularization and ensembling using multiobjective evolutionary algorithms
In: Congress on Evolutionary Computation (CEC'04), IEEE, 2004
Cited by 18 (3 self)
Abstract — Regularization is an essential technique for improving the generalization of neural networks. Traditionally, regularization is conducted by including an additional term in the cost function of a learning algorithm. One main drawback of these regularization techniques is that a hyperparameter determining to what extent the regularization influences the learning algorithm must be chosen beforehand. This paper addresses the neural network regularization problem from a multiobjective optimization point of view. During the optimization, both the structure and the parameters of the neural network are optimized. Slightly modified versions of two multiobjective optimization algorithms, the dynamic weighted aggregation (DWA) method and the elitist non-dominated sorting genetic algorithm (NSGA-II), are used and compared. An evolutionary multiobjective approach to neural network regularization has a number of advantages over the traditional methods. First, a number of models spanning a spectrum of model complexity can be obtained in one optimization run instead of only a single solution. Second, an efficient new regularization term can be introduced that is not applicable to gradient-based learning algorithms. As a natural by-product of the multiobjective approach, neural network ensembles can easily be constructed from the obtained networks with different levels of model complexity. Thus, the complexity of the ensemble can be adjusted by adjusting the weight of each member network in the ensemble. Simulations are carried out on a test function to illustrate the feasibility of the proposed ideas.
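The first advantage above, a whole spectrum of models from one run, amounts to extracting the non-dominated set over (error, complexity). A minimal sketch with hypothetical objective values, independent of any particular optimizer:

```python
def dominates(b, a):
    """b dominates a if it is no worse in both objectives and strictly
    better in at least one (both objectives are minimised here)."""
    return (b[0] <= a[0] and b[1] <= a[1]) and (b[0] < a[0] or b[1] < a[1])

def pareto_front(models):
    """Non-dominated subset of (error, complexity) pairs: the kind of
    accuracy-vs-complexity trade-off a single multiobjective run yields
    instead of one regularized network."""
    return [a for a in models if not any(dominates(b, a) for b in models)]

# Hypothetical (training error, number of weights) pairs.
models = [(0.10, 10), (0.20, 5), (0.05, 20), (0.20, 12), (0.30, 3)]
```

Every surviving pair is a candidate network at a different regularization strength; the dominated pair (0.20, 12) is discarded because (0.20, 5) matches its error with fewer weights.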
Analyzing Anticorrelation in Ensemble Learning
2001
Cited by 16 (1 self)
Anticorrelation has been used in training neural network ensembles. Negative correlation learning (NCL) is the state-of-the-art anticorrelation measure. We propose an alternative anticorrelation measure, RTQRT-NCL, which shows significant improvements on our test example, particularly with larger ensembles. We analyze the behavior of the negative correlation measure and derive a theoretical explanation of the improved performance of RTQRT-NCL in larger ensembles.
Evolving a Cooperative Population of Neural Networks by Minimizing Mutual Information
In Proceedings of the 2001 Congress on Evolutionary Computation, 2001
Cited by 12 (2 self)
Evolutionary ensembles with negative correlation learning (EENCL) is an evolutionary learning system for learning and designing neural network ensembles [1]. The fitness sharing used in EENCL was based on the idea of "covering" the same training patterns by shared individuals. This paper explores the connection between fitness sharing and information-theoretic concepts, and introduces mutual information into EENCL. By minimizing mutual information, EENCL can evolve a diverse and cooperative population of neural networks. The effectiveness of this evolutionary learning approach was tested on two real-world problems.
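The quantity being minimized can be illustrated with a plug-in estimate of the mutual information between two networks' discretised outputs. This is a generic estimator sketch, not EENCL's implementation; the discretisation into bins is assumed done by the caller.

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """Plug-in estimate of the mutual information (in nats) between two
    discrete output sequences, computed from joint and marginal counts.
    Identical outputs give high MI; independent outputs give MI near 0,
    which is the diversity condition a minimizer drives toward."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())
```

Driving this value down for every pair of networks in the population is one way to read "minimizing mutual information" in the abstract.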
Exploiting Diversity in Ensembles: Improving the Performance on Unbalanced Datasets
Cited by 12 (1 self)
Abstract. Ensembles are often capable of greater predictive performance than any of their individual classifiers. Despite the need for the classifiers to make different kinds of errors, the majority voting scheme typically used treats each classifier as though it contributed equally to the group's performance. This can be particularly limiting on unbalanced datasets, where one is more interested in complementary classifiers that can improve the true positive rate without significantly increasing the false positive rate. We therefore implement a genetic-algorithm-based framework that weights the contribution of each classifier according to an appropriate fitness function, so that classifiers that complement each other on the unbalanced dataset are preferred, resulting in significantly improved performance. The proposed framework can be built on top of any collection of classifiers, with different fitness functions.
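The weighted-voting and fitness pieces can be sketched as below. The function names and the particular TPR-minus-FPR objective are this sketch's assumptions: one example of the kind of unbalanced-data fitness the framework admits, with the GA loop that evolves `weights` omitted.

```python
def weighted_vote(weights, votes):
    """Weighted majority vote over binary (0/1) classifier outputs:
    predict 1 when the weighted score reaches half the total weight."""
    return 1 if sum(w * v for w, v in zip(weights, votes)) >= 0.5 * sum(weights) else 0

def fitness(weights, all_votes, labels, alpha=1.0):
    """Hypothetical unbalanced-data fitness: reward the true positive
    rate, penalise the false positive rate scaled by alpha."""
    tp = fp = pos = neg = 0
    for votes, y in zip(all_votes, labels):
        pred = weighted_vote(weights, votes)
        pos += (y == 1)
        neg += (y == 0)
        tp += (pred == 1 and y == 1)
        fp += (pred == 1 and y == 0)
    tpr = tp / pos if pos else 0.0
    fpr = fp / neg if neg else 0.0
    return tpr - alpha * fpr
```

A GA maximizing this fitness will up-weight classifiers whose correct positive votes are not already covered by the rest of the ensemble, which is exactly the complementarity argument above.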
CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features
Journal of Artificial Intelligence Research (JAIR), 2005
Cited by 12 (2 self)
In this paper we propose a crossover operator for real-valued evolutionary algorithms that is based on the statistical theory of population distributions. The operator uses the theoretical distribution of the gene values of the best individuals in the population. It takes into account the localization and dispersion features of these best individuals, with the aim that those features be inherited by the offspring. Our goal is to optimize the balance between exploration and exploitation in the search process. In order to test the efficiency and robustness of this crossover, we have used a set of functions to be optimized that differ with regard to criteria such as multimodality, separability, regularity, and epistasis. With this set of functions we can draw conclusions as a function of the problem at hand. We analyze the results using ANOVA and multiple-comparison statistical tests. As an example of how our crossover can be used to solve artificial intelligence problems, we have applied the proposed model to the problem of obtaining the weight of each network in an ensemble of neural networks. The results obtained surpass the performance of standard methods.
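The population feature CIXL2 exploits is the confidence interval of each gene's mean over the best individuals. The sketch below is a loose simplification under stated assumptions: it clamps a parent's genes into that interval, whereas the actual operator recombines the parent with the interval's bounds and centre as virtual parents. All names here are this sketch's own.

```python
import math
import statistics

def ci_bounds(values, t=1.96):
    """Confidence interval for a gene's mean over the best individuals:
    mean +/- t * stdev / sqrt(n), the localization-and-dispersion
    feature described in the abstract."""
    m = statistics.mean(values)
    half = t * statistics.stdev(values) / math.sqrt(len(values))
    return m - half, m + half

def crossover(parent, best_pop, t=1.96):
    """Simplified interval-guided crossover: clamp each parent gene into
    the per-gene confidence interval of the best individuals, pulling
    offspring toward the region the elite occupy."""
    child = []
    for g, gene_values in zip(parent, zip(*best_pop)):
        lo, hi = ci_bounds(gene_values, t)
        child.append(min(max(g, lo), hi))
    return child

# Hypothetical elite of three 2-gene individuals, and one parent.
best_pop = [[0.0, 10.0], [2.0, 10.0], [4.0, 10.0]]
child = crossover([10.0, 10.0], best_pop)
```

A gene on which the elite agree (all 10.0 here) collapses its interval to a point, so the offspring inherits it exactly; a gene with spread leaves room for exploration inside the interval.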