Results 1–10 of 20
Foundations of Genetic Programming, 2002
Abstract

Cited by 219 (65 self)
The goal of getting computers to automatically solve problems is central to artificial intelligence, machine learning, and the broad area encompassed by what Turing called “machine intelligence” [161, 162].
Genetic Programming and Data Structures, 1996
Abstract

Cited by 49 (25 self)
This thesis investigates the evolution and use of abstract data types within Genetic Programming (GP). In genetic programming the principles of natural evolution (fitness-based selection and recombination) act on program code to automatically generate computer programs. The research in this thesis is motivated by the observation from software engineering that data abstraction (e.g. via abstract data types) is essential in programs created by human programmers. We investigate whether abstract data types can be similarly beneficial to the automatic production of programs using GP. GP can automatically "evolve" programs which solve nontrivial problems, but few experiments have been reported where the evolved programs explicitly manipulate memory, and yet memory is an essential component of most computer programs. So far work on evolving programs that explicitly use memory has principally used either problem-specific memory models or a simple indexed memory model consisting of a single glo...
What makes a problem GP-hard? Analysis of a tunably difficult problem in genetic programming. Genetic Programming and Evolvable Machines, 2001
Abstract

Cited by 34 (6 self)
This paper addresses the issue of what makes a problem GP-hard by considering the binomial-3 problem. In the process, we discuss the efficacy of the metaphor of an adaptive fitness landscape to explain what is GP-hard. We show that, for at least this problem, the metaphor is misleading.
Multiple Interacting Programs: A Representation for Evolving Complex Behaviors. Cybernetics and Systems, 1998
Abstract

Cited by 24 (1 self)
This paper defines a representation for expressing complex behaviors, called multiple interacting programs (MIPs), and describes an evolutionary method for evolving solutions to difficult problems expressed as MIPs structures. The MIPs representation is a generalization of neural network architectures that can model any type of dynamic system. The evolutionary training method described is based on an evolutionary program originally used to evolve the architecture and weights of recurrent neural networks. Example experiments demonstrate the training method's ability to evolve appropriate MIPs solutions for difficult problems. An analysis of the evolved solutions shows their dynamics to be interesting and nontrivial.
An Investigation of Supervised Learning in Genetic Programming, 1998
Abstract

Cited by 22 (0 self)
This thesis is an investigation into Supervised Learning (SL) in Genetic Programming (GP). With its flexible tree-structured representation, GP is a type of Genetic Algorithm, using the Darwinian idea of natural selection and genetic recombination, evolving populations of solutions over many generations to solve problems. SL is a common approach in Machine Learning where the problem is presented as a set of examples. A good or fit solution is one which can successfully deal with all of the examples. In common with most Machine Learning approaches, GP has been used to solve many trivial problems. When applied to larger and more complex problems, however, several difficulties become apparent. When focusing on the basic features of GP, this thesis highlights the immense size of the GP search space, and describes an approach to measure this space. A stupendously flexible but frustratingly useless representation, Anarchically Automatically Defined Functions, is described. Some difficulties...
The reliability of confidence intervals for computational effort comparisons. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2007
Abstract

Cited by 4 (1 self)
This paper analyses the reliability of confidence intervals for Koza’s computational effort statistic. First, we conclude that dependence between the observed minimum generation and the observed cumulative probability of success leads to the production of more reliable confidence intervals for our preferred method. Second, we show that confidence intervals from 80% to 95% have appropriate levels of performance. Third, simulated data is used to consider the effect of large minimum generations and the confidence intervals are again found to be reliable. Finally, results from four large datasets collected from real genetic programming experiments are used to provide even more empirical evidence that the method for producing confidence intervals is reliable.
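For readers unfamiliar with the statistic under discussion: Koza defines computational effort as E = min over i of M·(i+1)·R(z,i), where M is the population size, P(M,i) is the cumulative probability of success by generation i (estimated from independent runs), and R(z,i) = ⌈ln(1−z)/ln(1−P(M,i))⌉ is the number of independent runs needed to reach overall success probability z. A minimal sketch of the point estimate — not the confidence-interval methods the paper studies; the function name and toy inputs below are illustrative:

```python
import math

def computational_effort(success_gens, n_runs, pop_size, z=0.99):
    """Koza's computational-effort statistic E = min_i M*(i+1)*R(z,i).

    success_gens: generation at which each successful run first solved
    the problem (unsuccessful runs simply do not appear in the list).
    """
    best = None
    for i in range(max(success_gens) + 1):
        # P(M, i): estimated cumulative probability of success by generation i
        p = sum(1 for g in success_gens if g <= i) / n_runs
        if p == 0.0:
            continue  # no successes yet, effort undefined at this generation
        if p >= 1.0:
            runs_needed = 1  # every run succeeds by generation i
        else:
            # R(z): runs needed so that P(at least one success) >= z
            runs_needed = math.ceil(math.log(1 - z) / math.log(1 - p))
        effort = pop_size * (i + 1) * runs_needed
        best = effort if best is None else min(best, effort)
    return best

# Toy example: 1 of 2 runs solved the problem at generation 1, population 10.
# P(M,1) = 0.5, so R(0.99) = ceil(ln 0.01 / ln 0.5) = 7 runs,
# and E = 10 * 2 * 7 = 140 individuals processed.
print(computational_effort([1], 2, 10))
```

The statistic's minimum over the observed success generations is exactly the source of the reliability questions the paper analyses: it is computed from two dependent sample estimates.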
Characterizing a tunably difficult problem in genetic programming. In GECCO, 2000
Abstract

Cited by 4 (0 self)
This paper examines the behavioral phenomena that occur with the tuning of the binomial-3 problem. Our analysis identifies a distinct set of phenomena that may be generalizable to other problems. These phenomena also bring into question whether GA theory has any bearing on GP theory.
Confidence intervals for computational effort comparisons. Genetic Programming: Proceedings of the 10th European Conference, EuroGP 2007
Abstract

Cited by 3 (2 self)
When researchers make alterations to the genetic programming algorithm they almost invariably wish to measure the change in performance of the evolutionary system. No one specific measure is standard, but Koza’s computational effort statistic is frequently used [8]. In this paper the use of Koza’s statistic is discussed and a study is made of three methods that produce confidence intervals for the statistic. It is found that an approximate 95% confidence interval can be easily produced.
Evolution of Genetic Programming Populations, 1996
Abstract

Cited by 1 (0 self)
We investigate in detail what happens as genetic programming (GP) populations evolve. Since we shall use the populations which showed GP can evolve stack data structures as examples, we start in Section 1 by briefly describing the stack experiment [Langdon, 1995]. In Section 2 we show that Price's Covariance and Selection Theorem can be applied to Genetic Algorithms (GAs) and GP to predict changes in gene frequencies. We follow the proof of the theorem with experimental justification using the GP runs from the stack problem. Section 3 briefly describes Fisher's Fundamental Theorem of Natural Selection and shows that, in its normal interpretation, it does not apply to practical GAs. An analysis of the stack populations, in Section 4, explains that the difficulty of the stack problem is due to the presence of "deceptive" high-scoring partial solutions in the population. These cause a negative correlation between necessary primitives and fitness. As Price's Theorem predicts, the frequency of neces...
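Price's theorem, as applied here to GAs and GP, states that the change in a gene's mean frequency between generations equals the covariance of parents' gene frequency with their number of offspring, divided by the mean number of offspring: Δq̄ = cov(z, q)/z̄. A toy sketch with hypothetical data, assuming faithful transmission so that the transmission-bias term of the full Price equation vanishes (crossover and mutation in real GP runs perturb this):

```python
def price_delta(gene_freq, n_offspring):
    """Predicted change in mean gene frequency via Price's theorem.

    gene_freq[i]:   frequency q_i of the gene in parent i
    n_offspring[i]: number of offspring z_i credited to parent i
    Returns cov(z, q) / z_mean (population covariance).
    """
    n = len(gene_freq)
    q_bar = sum(gene_freq) / n
    z_bar = sum(n_offspring) / n
    cov = sum((q - q_bar) * (z - z_bar)
              for q, z in zip(gene_freq, n_offspring)) / n
    return cov / z_bar

# Toy example: two parents; the gene carrier gets both offspring, so the
# gene's mean frequency rises from 0.5 to 1.0, a change of +0.5.
print(price_delta([1.0, 0.0], [2, 0]))
```

When selection is uncorrelated with the gene (zero covariance), the predicted change is zero, which is the baseline against which the "deceptive" negative correlations in the stack populations are measured.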