## Wise Breeding GA via Machine Learning Techniques for Function Optimization (2003)


### Download Links

- [ftp-illigal.ge.uiuc.edu]
- [www.cs.york.ac.uk]
- DBLP

### Other Repositories/Bibliography

Venue: Lecture Notes in Computer Science, 2723:1172

Citations: 6 (0 self)

### BibTeX

@INPROCEEDINGS{Llora03wisebreeding,
  author    = {Xavier Llora and David E. Goldberg},
  title     = {Wise Breeding GA via Machine Learning Techniques for Function Optimization},
  booktitle = {Genetic and Evolutionary Computation (GECCO 2003)},
  series    = {Lecture Notes in Computer Science},
  volume    = {2723},
  pages     = {1172--1183},
  year      = {2003},
  publisher = {Springer}
}


### Abstract

This paper explores how inductive machine learning can guide the breeding process of evolutionary algorithms for black-box function optimization. In particular, decision trees are used to identify the underlying characteristics of good and bad individuals, using the mined knowledge for wise breeding purposes. Inductive learning is complemented with statistical learning in order to define the breeding process. The proposed evolutionary process optimizes the target function dually, maximizing and minimizing it at the same time. The paper also summarizes some tuning and population sizing issues, as well as some preliminary results obtained using the proposed algorithm.

### Citations

7363 | Genetic Algorithms
- Goldberg
- 1989

Citation Context: ...a set D for supervised learning. In this example, the size of the individuals in the population is ℓ = 4, whereas the population size |P| = 27. The fitness f(x) is computed using the OneMax function (Goldberg, 1989; Goldberg, 2002). After sorting the population according to f(x), we split the population in three subsets: the best individuals, the worst individuals, and the mediocre ones. We will discuss more ab...
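The context above sketches a concrete setup: binary individuals of length ℓ = 4, a population of size |P| = 27, OneMax fitness, and a fitness-ranked split into best, mediocre, and worst subsets. A minimal Python sketch of that split (the equal-thirds proportion is an illustrative assumption; the paper discusses how the split point is actually chosen):

```python
import random

def onemax(x):
    """OneMax fitness: the number of 1-bits in the individual."""
    return sum(x)

def split_population(pop, fitness, frac=1/3):
    """Sort by fitness (descending) and split into best / mediocre / worst.

    The equal-thirds split is illustrative only; the paper discusses
    how this split point should actually be chosen.
    """
    ranked = sorted(pop, key=fitness, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return ranked[:k], ranked[k:-k], ranked[-k:]

random.seed(0)
ell, size = 4, 27  # values from the example quoted above
pop = [[random.randint(0, 1) for _ in range(ell)] for _ in range(size)]
best, mediocre, worst = split_population(pop, onemax)
```

The best and worst subsets then serve as the positive and negative examples handed to the inductive learner.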

4956 | C4.5: Programs for Machine Learning
- Quinlan
- 1993

Citation Context: ...d inductive tree based evolution) is the result of mixing a PMBGA and inductive decision trees. Properly speaking, SI3E defines a common framework based on PBIL (Baluja, 1994) and ID3 (Quinlan, 1986; Quinlan, 1993). This section describes both algorithms briefly, PBIL and ID3, in order to explain how SI3E integrates both for obtaining a model-based breeding GA in the next section. 2.1 Population Based Incremen...

3364 | Probabilistic decision trees
- Quinlan
- 1990

Citation Context: ...(statistical and inductive tree based evolution) is the result of mixing a PMBGA and inductive decision trees. Properly speaking, SI3E defines a common framework based on PBIL (Baluja, 1994) and ID3 (Quinlan, 1986; Quinlan, 1993). This section describes both algorithms briefly, PBIL and ID3, in order to explain how SI3E integrates both for obtaining a model-based breeding GA in the next section. 2.1 Population...

2978 | Data Mining: Practical Machine Learning Tools and Techniques
- Witten, Frank
- 2005

1696 | A theory of the learnable
- Valiant
- 1984

Citation Context: ...igure 2), should grow linearly with the length ℓ of the individuals in the population; that is, logarithmically with the size of the search space. For further detail about the PAC model, please refer to (Valiant, 1984; Haussler, 1988; Mitchell, 1997). 4 Experiments In this section we present some preliminary results obtained using SI3E. The conducted experiments involved SI3E and two different functions (OneMax, in...

635 | Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence
- Holland
- 1975

Citation Context: ...describes a characteristic of a good individual (⊕) or a bad one (⊖). We can take the interpretation of these rules one step further. They can be seen as a kind of notation of the underlying schemes (Holland, 1975; Goldberg, 1989) for good and bad individuals. This point can be assumed if we agree that the knowledge mined by ID3 is actually the building blocks of good and bad individuals. Inspecting the rules ...

301 | Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning
- Baluja
- 1994

Citation Context: ...This assumption, not true in many real-world problems, leads to some well-known algorithms. Some examples of algorithms that assume gene independence are: PBIL (Population Based Incremental Learning) (Baluja, 1994), UMDA (Univariate Marginal Distribution Algorithms) (Muhlenbein, 1998), and cGA (compact Genetic Algorithm) (Harik, Lobo, & Goldberg, 1998). Dependences among genes, and what that implies to the dis...
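PBIL, named in the context above as a gene-independence algorithm, keeps one Bernoulli probability per gene position and nudges it toward the best sampled individual instead of recombining genes. A minimal sketch on OneMax (the learning rate, population size, and generation count are illustrative assumptions, not settings from the paper):

```python
import random

def pbil_onemax(ell=20, pop_size=30, alpha=0.1, generations=200, seed=1):
    """Minimal PBIL: genes are assumed independent, so the model is
    just one Bernoulli probability per position."""
    rng = random.Random(seed)
    p = [0.5] * ell                     # initial probability vector
    for _ in range(generations):
        # Sample a population from the current probability vector.
        pop = [[int(rng.random() < pi) for pi in p] for _ in range(pop_size)]
        best = max(pop, key=sum)        # OneMax fitness = sum of bits
        # Shift each gene probability toward the best individual.
        p = [(1 - alpha) * pi + alpha * bi for pi, bi in zip(p, best)]
    return p

p = pbil_onemax()
```

On OneMax the probability vector drifts toward the all-ones string, which is the behavior SI3E inherits from its PBIL component.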

255 | Hierarchical Bayesian Optimization Algorithm: Toward a New Generation of Evolutionary Algorithms
- Pelikan
- 2005

Citation Context: ...ant work deals with gene dependence modeling and linkage learning. For an overview of these approaches please see [5,8]. An example of this kind of algorithms is BOA (Bayesian Optimization Algorithm) [9]. BOA bases its probability distribution on a Bayesian network. This network describes...

254 | The Design of Innovation: Lessons from and for Competent Genetic Algorithms
- Goldberg
- 2002

Citation Context: ...rvised learning. In this example, the size of the individuals in the population is ℓ = 4, whereas the population size |P| = 27. The fitness f(x) is computed using the OneMax function (Goldberg, 1989; Goldberg, 2002). After sorting the population according to f(x), we split the population in three subsets: the best individuals, the worst individuals, and the mediocre ones. We will discuss more about how this spl...

239 | From recombination of genes to the estimation of distributions I. Binary parameters
- Mühlenbein, Paaß
- 1996

Citation Context: ...roposed algorithm. 1 Introduction Recently, a new interest in the genetic algorithms (GA) community has been growing. The work published by Baluja [1,2], Juels & Wattenberg [3], and Mühlenbein & Paaß [4]—among others—sparked a new way to approach GA. Instead of recombining genes, as in a traditional GA, this new approach proposes the usage of explicit statistics as the main breeding force. These k...

231 | The compact genetic algorithm
- Harik, Lobo, et al.
- 1999

Citation Context: ...ples of algorithms that assume gene independence are: PBIL (Population Based Incremental Learning) [1], UMDA (Univariate Marginal Distribution Algorithms) [6], and the cGA (compact Genetic Algorithm) [7]. Dependences among genes, and what that implies to the distributions to be learned, have also been studied by several authors. Relevant work deals with gene dependence modeling and linkage learning. ...

223 | Quantifying inductive bias: AI learning algorithms and Valiant's learning framework
- Haussler
- 1988

Citation Context: ...fill the unspecified positions of r using p(x|P⊕)... [remainder of Figure 2: Breeding algorithm used by SI3E] ...as sufficient examples are provided. This bound is computed (Haussler, 1988) as m ≥ (1/ε)(ln(1/δ) + ℓ ln 2) (Equation 9). Equation 9 suggests an interesting result for SI3E when analyzed asymptotically: m must grow at O(ℓ). Therefore, the number of new individuals generated, M...
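The bound quoted in the context above is Haussler's sample-complexity result, m ≥ (1/ε)(ln(1/δ) + ℓ ln 2), which grows linearly in the individual length ℓ, i.e., logarithmically in the size of the 2^ℓ search space. A quick numeric check (the ε and δ values are arbitrary illustrations):

```python
import math

def haussler_bound(ell, eps=0.1, delta=0.05):
    """Smallest integer m with m >= (1/eps) * (ln(1/delta) + ell * ln 2)."""
    return math.ceil((1 / eps) * (math.log(1 / delta) + ell * math.log(2)))

# Doubling ell roughly doubles the required sample size: linear growth in ell.
bounds = {ell: haussler_bound(ell) for ell in (10, 20, 40)}
```

The slope is ln(2)/ε per extra gene, which is the asymptotic O(ℓ) behavior the excerpt refers to.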

178 | Removing the Genetics from the Standard Genetic Algorithm
- Baluja, Caruana
- 1995

Citation Context: ...sults obtained using the proposed algorithm. 1 Introduction Recently, a new interest in the genetic algorithms (GA) community has been growing. The work published by Baluja (1994), later extended in (Baluja, 1995), and Mühlenbein and Paaß (1996)---among others---sparked a new way to approach GA. Instead of recombining genes, as in a traditional GA, this new approach proposes the usage of explicit statistics as the...

103 | The equation for response to selection and its use for prediction
- Mühlenbein
- 1997

Citation Context: ...e well-known algorithms. Some examples of algorithms that assume gene independence are: PBIL (Population Based Incremental Learning) (Baluja, 1994), UMDA (Univariate Marginal Distribution Algorithms) (Muhlenbein, 1998), and cGA (compact Genetic Algorithm) (Harik, Lobo, & Goldberg, 1998). Dependences among genes, and what that implies to the distributions to be learned, have also been studied by several authors. Re...

83 | Bayesian Optimization Algorithm: From Single Level to Hierarchy
- Pelikan
- 2002

Citation Context: ...e learned, have also been studied by several authors. Relevant work deals with gene dependence modeling and linkage learning. For an overview of these approaches please see (Larrañaga & Lozano, 2002; Pelikan, 2002). An example of this kind of algorithms is BOA (Bayesian Optimization Algorithm) (Pelikan, Goldberg, & Cantú-Paz, 1999). BOA bases its probability distribution on a Bayesian network. This network des...

59 | Estimation of Distribution Algorithms
- Larrañaga, Lozano
- 2002

Citation Context: ...w approach proposes the usage of explicit statistics as the main breeding force. These kinds of GA are known as probabilistic model building GA (PMBGA), or estimation of distribution algorithms (EDAs) [5]. Instead of using crossover or mutation operators, these GA breed a new population of individuals by sampling a learned probabilistic model that describes the good individuals in the population. Some ea...

47 | Stochastic Hillclimbing as a Baseline Method for Evaluating Genetic Algorithms
- Juels, Wattenberg
- 1994

Citation Context: ...esults obtained using the proposed algorithm. 1 Introduction Recently, a new interest in the genetic algorithms (GA) community has been growing. The work published by Baluja [1,2], Juels & Wattenberg [3], and Mühlenbein & Paaß [4]—among others—sparked a new way to approach GA. Instead of recombining genes, as in a traditional GA, this new approach proposes the usage of explicit statistics as the m...

39 | Inductive Learning System AQ15c: The Method and User's Guide
- Wnek, Kaufman, et al.
- 1995

Citation Context: ...earning paradigms, evolutionary learning and inductive rule learning. On the one hand, the evolutionary learning uses a simple GA for function optimization. On the other hand, LEM1 and LEM2 use AQ15 [12] and AQ18 [13] for rule learning. However, LEM is a hybrid approach that still relies on traditional GA mechanisms, although they can be turned off, moving it away from the ideas that inspired PMBGA a...

36 | Learnable Evolution Model: Evolutionary Processes Guided by
- Michalski

Citation Context: ...used as a fitness function, f : {0,1}^ℓ → ℝ. Using the fitness function f(x), we can sort the individuals in a population P. Therefore, we know which are the best and worst individuals. As proposed in LEM (Michalski, 2000), we mark the best individuals as positive examples, P⊕. In the same way, we can mark the worst individuals as negative examples, P⊖. These sets contain the best and worst individuals seen so far...

22 | Learnable Evolution: Combining Symbolic and Evolutionary Learning
- Michalski

Citation Context: ...del building GA for breeding purposes. Some of them are provided by inductive machine learning techniques. An example of this kind of approach is LEM (Learnable Evolution Model) proposed by Michalski [10,11]. LEM combines two different learning paradigms, evolutionary learning and inductive rule learning. On the one hand, the evolutionary learning uses a simple GA for function optimization. On the other ...

21 | The equation for response to selection and its use for prediction. Evolutionary Computation 5
- Mühlenbein
- 1998

Citation Context: ...leads to some well-known algorithms. Some examples of algorithms that assume gene independence are: PBIL (Population Based Incremental Learning) [1], UMDA (Univariate Marginal Distribution Algorithms) [6], and the cGA (compact Genetic Algorithm) [7]. Dependences among genes, and what that implies to the distributions to be learned, have also been studied by several authors. Relevant work deals with ge...

15 | Analyzing the PBIL algorithm by means of discrete dynamical systems
- González, Lozano, et al.
- 1999

Citation Context: ...size. In this paper we tune these parameters using previous work. The first one, α, is usually set between [0.1, 0.2]. These values are common in the reinforcement learning community. Please refer to [17,20] for more information. The other parameter to tune is the population size. Our population sizing model is based on guaranteeing successful breeding. That is, we size the population in terms of P⊕ and P...

4 | The AQ18 Machine Learning and Data Mining System: An Implementation and User's Guide
- Kaufman, Michalski
- 1999

Citation Context: ...gms, evolutionary learning and inductive rule learning. On the one hand, the evolutionary learning uses a simple GA for function optimization. On the other hand, LEM1 and LEM2 use AQ15 [12] and AQ18 [13] for rule learning. However, LEM is a hybrid approach that still relies on traditional GA mechanisms, although they can be turned off, moving it away from the ideas that inspired PMBGA and EDAs. This ...

1 | An empirical comparison of seven iterative and evolutionary function optimization heuristics
- Baluja
- 1995

Citation Context: ...sults obtained using the proposed algorithm. 1 Introduction Recently, a new interest in the genetic algorithms (GA) community has been growing. The work published by Baluja (1994), later extended in (Baluja, 1995), and Mühlenbein and Paaß (1996)---among others---sparked a new way to approach GA. Instead of recombining genes, as in a traditional GA, this new approach proposes the usage of explicit statistics as the...