The Proportional Genetic Algorithm: Gene Expression in a Genetic Algorithm
 University of Central Florida
, 2002
Cited by 29 (11 self)
We introduce a genetic algorithm (GA) with a new representation method which we call the proportional GA (PGA). The PGA is a multi-character GA that relies on the existence or nonexistence of genes to determine the information that is expressed. The information represented by a PGA individual depends only on what is present on the individual and not on the order in which it is present. As a result, the order of the encoded information is free to evolve in response to factors other than the value of the solution, for example, in response to the identification and formation of building blocks. The PGA is also able to dynamically evolve the resolution of encoded information. In this paper, we describe our motivations for developing this representation and provide a detailed description of a PGA along with a discussion of its benefits and drawbacks. We compare the behavior of a PGA with that of a canonical GA (CGA) and discuss conclusions and future work based on these preliminary studies.
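As an illustration of the order-independent representation this abstract describes, here is a minimal sketch in which a genotype is treated as a multiset of characters and each expressed value is the proportion of its character in the individual. The function name and decoding scheme are our assumptions for illustration, not the paper's specification:

```python
from collections import Counter

def pga_decode(genotype, alphabet):
    """Decode a PGA-style genotype: each value is the proportion of its
    character in the individual, so gene order carries no information."""
    counts = Counter(genotype)
    total = len(genotype)
    return {ch: counts.get(ch, 0) / total for ch in alphabet}

# Two orderings of the same multiset decode to the same phenotype:
a = pga_decode("AABAB", "AB")
b = pga_decode("BABAA", "AB")
assert a == b
```

Because only counts matter, the position of each character is free to evolve without changing the decoded solution, matching the abstract's claim about order.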
Efficient Linkage Discovery by Limited Probing
 Evolutionary Computation 12
, 2004
Cited by 28 (3 self)
Abstract. This paper addresses the problem of determining the epistatic linkage of a function from binary strings to the reals. There is a close relationship between the Walsh coefficients of the function and “probes” (or perturbations) of the function. This relationship leads to two linkage detection algorithms that generalize earlier algorithms of the same type. A rigorous complexity analysis is given of the first algorithm. The second algorithm not only detects the epistatic linkage, but also computes all of the Walsh coefficients. This algorithm is much more efficient than previous algorithms for the same purpose.
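A minimal sketch of the pairwise case of such a probe: perturbing two bit positions and combining the four signed function values is nonzero (for some point) exactly when a Walsh coefficient covers both bits, i.e. when the bits are epistatically linked. The paper's probes generalize to arbitrary bit masks; this two-bit version and its helper names are our illustration:

```python
def flip(x, i):
    """Flip bit i of the integer-encoded bit string x."""
    return x ^ (1 << i)

def probe2(f, x, i, j):
    """Second-order probe of f at point x for bit positions i and j.
    Nonzero for some x iff i and j are epistatically linked."""
    return f(x) - f(flip(x, i)) - f(flip(x, j)) + f(flip(flip(x, i), j))

# Example: f = b0*b1 + b2 has linkage between bits 0 and 1 only.
f = lambda x: ((x & 1) * ((x >> 1) & 1)) + ((x >> 2) & 1)
assert probe2(f, 0, 0, 1) != 0   # linked pair detected
assert probe2(f, 0, 0, 2) == 0   # no linkage between bits 0 and 2
```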
Building Predictors from Vertically Distributed Data
 Proceedings of the 14th Annual IBM Centers for Advanced Studies Conference
, 2004
Cited by 9 (0 self)
Due in part to the large volume of data available today, but more importantly to privacy concerns, data are often distributed across institutional, geographical and organizational boundaries rather than being stored in a centralized location. Data can be distributed by separating objects or attributes: in the homogeneous case, sites contain subsets of objects with all attributes, while in the heterogeneous case sites contain subsets of attributes for all objects. Ensemble approaches combine the results obtained from a number of classifiers to obtain a final classification. In this paper, we present a novel ensemble approach in which data are partitioned by attributes. We show that this method can successfully be applied to a wide range of data and can even produce an increase in classification accuracy compared to a centralized technique. As an ensemble approach, our technique exchanges models or classification results instead of raw data, which makes it suitable for privacy-preserving data mining. In addition, both final model size and runtime are typically reduced compared to a centralized model. The proposed technique is evaluated using a decision tree, a variety of datasets, and several voting schemes. This approach is suitable for physically distributed data as well as privacy-preserving data mining.
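The ensemble idea can be sketched as follows: each site trains a local model on its own attribute subset and only votes are exchanged, never raw data. For self-containment we use trivial one-rule classifiers as stand-ins for the paper's decision trees; all names here are our assumptions:

```python
from collections import Counter, defaultdict

def train_stump(rows, labels, attr):
    """One-rule classifier on a single attribute: predict the majority
    label observed for each attribute value (a stand-in for a tree)."""
    by_val = defaultdict(Counter)
    for row, y in zip(rows, labels):
        by_val[row[attr]][y] += 1
    return {v: c.most_common(1)[0][0] for v, c in by_val.items()}

def ensemble_predict(stumps, row):
    """Each site votes with its local model; the majority label wins."""
    votes = Counter(stumps[a].get(row[a]) for a in stumps)
    return votes.most_common(1)[0][0]

# Vertically partitioned toy data: site 0 holds attribute 0, site 1 holds attribute 1.
rows = [(0, 0), (0, 0), (1, 1), (1, 1)]
labels = [0, 0, 1, 1]
stumps = {a: train_stump(rows, labels, a) for a in (0, 1)}
assert ensemble_predict(stumps, (0, 0)) == 0
```

Only the per-site models (`stumps`) and votes cross site boundaries, which is what makes the scheme attractive for privacy-preserving mining.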
Chi-square matrix: an approach for building-block identification
 M.J. Maher (Ed.): 9th Asian Computing Science Conference, 2004
Cited by 7 (6 self)
Abstract. This paper presents a line of research in genetic algorithms (GAs) called building-block identification. The building blocks (BBs) are common structures inferred from a set of solutions. In a simple GA, the crossover operator plays an important role in mixing BBs. However, crossover may disrupt the BBs because the cut point is chosen at random. Therefore the BBs need to be identified explicitly so that the solutions are efficiently mixed. Let S be a set of binary solutions and the solution s = b1...bℓ, bi ∈ {0, 1}. We construct a symmetric matrix of which the element in row i and column j, denoted by mij, is the chi-square statistic of variables bi and bj. The larger mij is, the stronger the dependency between bit i and bit j. If mij is high, bit i and bit j should be passed together to prevent BB disruption. Our approach is validated for additively decomposable functions (ADFs) and hierarchically decomposable functions (HDFs). In terms of scalability, our approach shows a polynomial relationship between the number of function evaluations required to reach the optimum and the problem size. A comparison between the chi-square matrix and the hierarchical Bayesian optimization algorithm (hBOA) shows that the matrix computation is 10 times faster and uses 10 times less memory than constructing the Bayesian network.
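The matrix element mij is a standard chi-square statistic over the 2×2 contingency table of bits i and j in the population. A minimal sketch of that computation (function name is ours):

```python
def chi_square(pop, i, j):
    """Chi-square statistic between bit positions i and j over a
    population of equal-length binary strings; a higher value means a
    stronger dependency, as for the matrix elements m_ij."""
    n = len(pop)
    obs = {(a, b): 0 for a in "01" for b in "01"}
    for s in pop:
        obs[(s[i], s[j])] += 1          # fill the 2x2 contingency table
    chi2 = 0.0
    for a in "01":
        for b in "01":
            row = sum(obs[(a, c)] for c in "01")
            col = sum(obs[(c, b)] for c in "01")
            exp = row * col / n          # expected count under independence
            if exp > 0:
                chi2 += (obs[(a, b)] - exp) ** 2 / exp
    return chi2

pop = ["0000", "0011", "1100", "1111"]
# Bits 0 and 1 always agree; bits 0 and 2 are independent in this sample.
assert chi_square(pop, 0, 1) > chi_square(pop, 0, 2)
```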
Almost Tight Upper Bound for Finding Fourier Coefficients of Bounded Pseudo-Boolean Functions
SUSTAINABLE EVOLUTIONARY ALGORITHMS AND SCALABLE EVOLUTIONARY SYNTHESIS OF DYNAMIC SYSTEMS
, 2004
Cited by 4 (0 self)
This dissertation concerns the principles and techniques for scalable evolutionary computation to achieve better solutions for larger problems with more computational resources. It suggests that many of the limitations of existing evolutionary algorithms, such as premature convergence, stagnation, loss of diversity, and lack of reliability and efficiency, derive from the fundamental convergent evolution model, the oversimplified “survival of the fittest” Darwinian evolution model. Within this model, the higher the fitness the population achieves, the more search capability is lost. This is also the case for many other conventional search techniques. The main result of this dissertation is the introduction of a novel sustainable evolution model, the Hierarchical Fair Competition (HFC) model, and five corresponding sustainable evolutionary algorithms (EAs) for evolutionary search. By maintaining individuals in hierarchically organized fitness levels and keeping evolution going at all fitness levels, HFC transforms the conventional convergent evolutionary computation model into a sustainable search framework by ensuring a continuous supply and incorporation of low-level building blocks and by culturing and maintaining building blocks of intermediate levels with its …
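The core admission mechanism of hierarchically organized fitness levels can be sketched as follows. This is a hypothetical simplification of the HFC idea, not the dissertation's full algorithm; the function name and threshold scheme are our assumptions:

```python
def assign_level(fitness, thresholds):
    """HFC-style admission: an individual enters the highest level whose
    admission threshold its fitness meets (thresholds in ascending order),
    so individuals only ever compete against peers of comparable fitness."""
    level = 0
    for k, t in enumerate(thresholds):
        if fitness >= t:
            level = k + 1
    return level

thresholds = [0.25, 0.5, 0.75]
assert assign_level(0.1, thresholds) == 0   # stays in the base level
assert assign_level(0.6, thresholds) == 2   # admitted to a middle level
assert assign_level(0.9, thresholds) == 3   # admitted to the top level
```

Because lower levels keep evolving regardless of progress at the top, the framework retains a supply of low-level building blocks instead of converging.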
Multi-Chromosomal Representations and Chromosome Shuffling in Evolutionary Algorithms
 In Proc. of the 2003 Congress on Evolutionary Computation Conference
, 2003
Cited by 4 (1 self)
We present experiments investigating the use of multi-chromosomal representations in evolutionary algorithms. Specifically, the conventional representation of parameters on a single chromosome is compared to a genotype encoding with multiple chromosomes on a set of test functions. In this context we present chromosome shuffling, a genetic operator that recombines complete chromosomes, motivated by biological evidence. The hypothesis that the multi-chromosomal representation ameliorates the transmission of good subsolutions to the population is tested on functions of varying degrees of complexity.
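A minimal sketch of such a shuffling operator, under the assumption (ours) that a genotype is a list of chromosomes and each chromosome is independently swapped between the two parents with probability 1/2:

```python
import random

def chromosome_shuffle(parent_a, parent_b, rng=random):
    """Recombine two multi-chromosomal genotypes by exchanging whole
    chromosomes: every offspring chromosome is inherited intact from one
    parent, so within-chromosome linkage is never cut."""
    child_a, child_b = [], []
    for ca, cb in zip(parent_a, parent_b):
        if rng.random() < 0.5:
            ca, cb = cb, ca              # swap this chromosome pair
        child_a.append(ca)
        child_b.append(cb)
    return child_a, child_b
```

In contrast to one-point crossover on a single long chromosome, no cut point can ever fall inside a chromosome, which is the mechanism the abstract credits with transmitting good subsolutions.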
Simultaneity matrix for solving hierarchically decomposable functions
 Proceedings of the Genetic and Evolutionary Computation
, 2004
Cited by 3 (3 self)
Abstract. The simultaneity matrix is an ℓ×ℓ matrix of numbers. It is constructed from a set of ℓ-bit solutions. The matrix element mij is the degree of linkage between bit positions i and j. To exploit the matrix, we partition {0,...,ℓ − 1} by putting i and j in the same partition subset if mij is significantly high. The partition represents the bit positions of building blocks (BBs). The partition is used in solution recombination so that the bits governed by the same partition subset are passed together. It can be shown that by exploiting the simultaneity matrix the hierarchically decomposable functions can be solved with a polynomial relationship between the number of function evaluations required to reach the optimum and the problem size. A comparison to the hierarchical Bayesian optimization algorithm (hBOA) is made. The hBOA uses fewer function evaluations than our algorithm. However, computing the matrix is 10 times faster and uses 10 times less memory than constructing the Bayesian network.
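The partitioning step described above (put i and j in the same subset when mij is high) amounts to threshold-based clustering of matrix entries, which can be sketched with a union-find pass. The function name and threshold rule are our illustration:

```python
def partition_bits(m, threshold):
    """Partition bit positions {0,...,l-1}: merge i and j into the same
    subset whenever m[i][j] exceeds the threshold (union-find over a
    simultaneity/linkage matrix)."""
    l = len(m)
    parent = list(range(l))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(l):
        for j in range(i + 1, l):
            if m[i][j] > threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(l):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

m = [[0, 9, 0, 0],
     [9, 0, 0, 0],
     [0, 0, 0, 9],
     [0, 0, 9, 0]]
assert partition_bits(m, 5) == [[0, 1], [2, 3]]   # two building blocks
```

Recombination would then pass each returned subset of bit positions together, preventing BB disruption.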
The Proportional Genetic Algorithm
, 2002
Cited by 3 (0 self)
This paper summarizes the initial studies of a new genetic algorithm (GA) representation method which we call the proportional genetic algorithm (PGA). Additional details of this work may be found elsewhere.
Mutation Rates of the (1+1)-EA on Pseudo-Boolean Functions of Bounded Epistasis
Cited by 3 (1 self)
When the epistasis of the fitness function is bounded by a constant, we show that the expected fitness of an offspring of the (1+1)-EA can be efficiently computed for any point. Moreover, we show that, for any point, it is always possible to efficiently retrieve the “best” mutation rate at that point, in the sense that the expected fitness of the resulting offspring is maximized. On linear functions, it has been shown that a mutation rate of 1/n is provably optimal. On functions where epistasis is bounded by a constant k, we show that for sufficiently high fitness, the commonly used mutation rate of 1/n is also best, at least in terms of maximizing the expected fitness of the offspring. However, we find that for certain ranges of fitness values, the best mutation rate can be considerably higher, and can be found by solving for the real roots of a degree-k polynomial whose coefficients contain the nonzero Walsh coefficients of the fitness function. Simulation results on maximum k-satisfiability problems and NK-landscapes show that this expectation-maximized mutation rate can cause significant gains early in search.
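The quantity being maximized is the expected offspring fitness under bitwise mutation with rate p. The paper computes it efficiently from Walsh coefficients; for small n it can be checked by brute force over all 2^n offspring, and the expectation-maximizing rate found by a simple grid search. This sketch and its function names are ours, not the paper's algorithm:

```python
from itertools import product

def expected_offspring_fitness(f, x, p, n):
    """Exact expected fitness of the (1+1)-EA offspring of x under
    bitwise mutation rate p, by summing over all 2^n possible offspring
    (feasible only for small n)."""
    total = 0.0
    for bits in product([0, 1], repeat=n):
        y = sum(b << i for i, b in enumerate(bits))
        d = bin(x ^ y).count("1")       # Hamming distance from parent
        total += f(y) * (p ** d) * ((1 - p) ** (n - d))
    return total

def best_rate(f, x, n, grid=101):
    """Grid search for the expectation-maximizing mutation rate in (0, 1)."""
    rates = [k / (grid - 1) for k in range(1, grid - 1)]
    return max(rates, key=lambda p: expected_offspring_fitness(f, x, p, n))

# ONEMAX from the all-zeros string: expected offspring fitness is n*p,
# so the one-step-best rate sits at the top of the grid, far above 1/n.
onemax = lambda y: bin(y).count("1")
```

This illustrates the abstract's point that the one-step expectation-maximizing rate can be much larger than the usual 1/n, even though 1/n is the right choice in other regimes.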