Results 1–10 of 171
No Free Lunch Theorems for Optimization, 1997
Cited by 640 (9 self)
Abstract:
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems yield a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and to benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori “head-to-head” minimax distinctions between optimization algorithms, distinctions that arise despite the NFL theorems’ enforcing a type of uniformity over all algorithms.
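The averaging argument behind the NFL theorems can be checked by brute force on a toy finite domain. The sketch below is illustrative, not the paper's formalism: it enumerates every objective function on a three-point space and shows that two different deterministic search strategies (fixed visiting orders) achieve exactly the same average performance over all functions.

```python
from itertools import product

X = [0, 1, 2]  # toy finite search space
# enumerate every possible objective function f : X -> {0, 1}
all_functions = [dict(zip(X, ys)) for ys in product([0, 1], repeat=len(X))]

def best_after(order, f, m=2):
    # performance measure: best value seen in the first m evaluations
    # of a fixed visiting order
    return max(f[x] for x in order[:m])

# two deterministic "algorithms" = two different fixed visiting orders
alg_a = [0, 1, 2]
alg_b = [2, 0, 1]

avg_a = sum(best_after(alg_a, f) for f in all_functions) / len(all_functions)
avg_b = sum(best_after(alg_b, f) for f in all_functions) / len(all_functions)
# the averages are identical: any gain on some functions is offset on others
```

Any other pair of visiting orders gives the same result, because averaging over all functions washes out any structure an algorithm could exploit.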
Evolutionary Programming Made Faster. IEEE Transactions on Evolutionary Computation, 1999
Cited by 206 (36 self)
Abstract:
Evolutionary programming (EP) has been applied with success to many numerical and combinatorial optimization problems in recent years. EP has rather slow convergence rates, however, on some function optimization problems. In this paper, a "fast EP" (FEP) is proposed which uses Cauchy instead of Gaussian mutation as the primary search operator. The relationship between FEP and classical EP (CEP) is similar to that between fast simulated annealing and the classical version. Both analytical and empirical studies have been carried out to evaluate the performance of FEP and CEP for different function optimization problems. This paper shows that FEP is very good at search in a large neighborhood while CEP is better at search in a small local neighborhood. For a suite of 23 benchmark problems, FEP performs much better than CEP for multimodal functions with many local minima while being comparable to CEP in performance for unimodal and multimodal functions with only a few local minima. This paper also shows the relationship between the search step size and the probability of finding a global optimum, and thus explains why FEP performs better than CEP on some functions but not on others. In addition, the importance of the neighborhood size and its relationship to the probability of finding a near-optimum is investigated. Based on these analyses, an improved FEP (IFEP) is proposed and tested empirically. This technique mixes different search operators (mutations). The experimental results show that IFEP performs better than, or as well as, the better of FEP and CEP for most benchmark problems tested.
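The Cauchy-versus-Gaussian contrast can be sketched in a few lines. This is a minimal illustration of the two mutation operators, not the paper's full EP update rule (which also self-adapts per-coordinate step sizes): the heavy tails of the Cauchy distribution produce far more long jumps at the same scale, which is what lets FEP escape distant local minima.

```python
import math
import random

def gaussian_mutation(x, eta=1.0):
    # classical EP (CEP): Gaussian perturbation -- nearly all steps are small and local
    return x + eta * random.gauss(0.0, 1.0)

def cauchy_mutation(x, eta=1.0):
    # fast EP (FEP): standard Cauchy perturbation via inverse-CDF sampling;
    # heavy tails make large "escape" steps far more probable
    return x + eta * math.tan(math.pi * (random.random() - 0.5))

random.seed(0)
big_gauss = sum(abs(gaussian_mutation(0.0)) > 5 for _ in range(10_000))
big_cauchy = sum(abs(cauchy_mutation(0.0)) > 5 for _ in range(10_000))
# big_cauchy dwarfs big_gauss: roughly 13% of standard Cauchy draws
# exceed 5 in magnitude, versus essentially none for the standard Gaussian
```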
An Immunological Model of Distributed Detection and Its Application to Computer Security, 1999
Cited by 86 (5 self)
Abstract:
This dissertation explores an immunological model of distributed detection, called negative detection, and studies its performance in the domain of intrusion detection on computer networks. The goal of the detection system is to distinguish between illegitimate behaviour (nonself) and legitimate behaviour (self). The detection system consists of sets of negative detectors that detect instances of nonself; these detectors are distributed across multiple locations. The negative detection model was developed previously; this research extends that previous work in several ways. Firstly, analyses are derived for the negative detection model. In particular, a framework for explicitly incorporating distribution is developed, and is used to demonstrate that negative detection is both scalable and robust. Furthermore, it is shown that any scalable distributed detection system that requires communication (memory sharing) is always less robust than a system that does not require communication...
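The core negative-selection idea can be sketched briefly. This is a generic illustration of the censoring scheme (random candidates that match any self string are discarded; survivors become detectors), using an r-contiguous-bits matching rule; the string length, r, and self set here are made up for the example, not taken from the dissertation.

```python
import random

def matches(detector, sample, r=3):
    # r-contiguous-bits rule: detector fires if it agrees with the sample
    # on r consecutive positions
    return any(detector[i:i + r] == sample[i:i + r]
               for i in range(len(sample) - r + 1))

def censor_detectors(self_set, n_detectors, length=8, r=3):
    # censoring phase: generate random candidates and keep only those that
    # match NO self string; survivors can be distributed across hosts
    detectors = []
    while len(detectors) < n_detectors:
        cand = ''.join(random.choice('01') for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_nonself(sample, detectors, r=3):
    # any single detector firing is enough to flag the sample as nonself
    return any(matches(d, sample, r) for d in detectors)
```

By construction no detector can fire on self, so false positives on the training self set are impossible; coverage of nonself depends on how many detectors each location holds.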
Adaptive simulated annealing (ASA): Lessons learned. Control and Cybernetics, 1996
Cited by 72 (14 self)
Abstract:
Adaptive simulated annealing (ASA) is a global optimization algorithm based on an associated proof that the parameter space can be sampled much more efficiently than by previous simulated annealing algorithms. The author's ASA code has been publicly available for over two years. During this time the author has volunteered to help people via e-mail, and the feedback obtained has been used to further develop the code.
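The efficiency claim rests on ASA's annealing schedule, which cools exponentially fast in the annealing index rather than logarithmically. A minimal sketch of that schedule follows, assuming the published ASA form T(k) = T0·exp(−c·k^(1/D)) for a D-dimensional parameter space; the constants T0 and c here are illustrative placeholders, not values from the paper.

```python
import math

def asa_temperature(k, T0=1.0, c=1.0, D=2):
    # ASA schedule: T(k) = T0 * exp(-c * k**(1/D)).
    # The associated proof shows this exponentially fast cooling still
    # samples the D-dimensional space sufficiently, in contrast to the
    # much slower T0 / ln(k) schedule classical Boltzmann annealing needs.
    return T0 * math.exp(-c * k ** (1.0 / D))

schedule = [asa_temperature(k) for k in range(5)]
# temperatures fall off rapidly as the annealing index k grows
```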
Optimal Ordered Problem Solver, 2002
Cited by 62 (20 self)
Abstract:
We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially, we extend the principles of optimal non-incremental universal search to build an incremental universal learner that is able to improve itself through experience.
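The "optimal non-incremental universal search" being extended is Levin search, whose time-allocation scheme is easy to sketch. The toy below is a generic illustration, not OOPS itself: in phase i, every program p with description length l(p) ≤ i is run for 2^(i − l(p)) steps, so total effort per phase is bounded while shorter programs get exponentially more time.

```python
def levin_search(program_lengths, solves, max_phase=20):
    """Toy Levin-style search. `program_lengths` maps each candidate program
    to its description length l(p); `solves(p, steps)` reports whether p
    solves the task within the given step budget."""
    for phase in range(1, max_phase + 1):
        for p, length in program_lengths.items():
            if length <= phase and solves(p, 2 ** (phase - length)):
                return p, phase
    return None, None
```

OOPS departs from this non-incremental scheme by carrying solutions forward: programs may invoke and build on the frozen solutions to earlier tasks instead of restarting from scratch.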
Active Learning with Multiple Views, 2002
Cited by 41 (1 self)
Abstract:
Active learners alleviate the burden of labeling large amounts of data by detecting, and asking the user to label, only the most informative examples in the domain. We focus here on active learning for multi-view domains, in which there are several disjoint subsets of features (views), each of which is sufficient to learn the target concept. In this paper we make several contributions. First, we introduce Co-Testing, the first approach to multi-view active learning. Second, we extend the multi-view learning framework by also exploiting weak views, which are adequate only for learning a concept that is more general or more specific than the target concept. Finally, we empirically show that Co-Testing outperforms existing active learners on a variety of real-world domains such as wrapper induction, Web page classification, advertisement removal, and discourse tree parsing.
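The Co-Testing query heuristic can be sketched in one function. This is a simplified illustration of the idea rather than the paper's algorithm: since each view alone suffices to learn the concept, any unlabeled example on which the two views' hypotheses disagree (a contention point) guarantees that at least one hypothesis is wrong there, which makes it an informative example to have the user label.

```python
def contention_points(unlabeled, predict_view1, predict_view2):
    # examples where hypotheses trained on the two disjoint feature views
    # disagree; labeling one of these is guaranteed to correct a mistake
    # of at least one view's current hypothesis
    return [x for x in unlabeled if predict_view1(x) != predict_view2(x)]
```

In each active-learning round, one contention point is labeled, both per-view hypotheses are retrained, and the process repeats until the views stop disagreeing or the labeling budget runs out.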
Perhaps Not a Free Lunch But At Least a Free Appetizer, 1998
Cited by 40 (6 self)
Abstract:
It is often claimed that Evolutionary Algorithms are superior to other optimization techniques, in particular in situations where not much is known about the objective function to be optimized. In contrast, Wolpert and Macready (1997) proved that all optimization techniques exhibit the same behavior on average over all f : X → Y, where X and Y are finite sets. This result is called the No Free Lunch Theorem. Here, different optimization scenarios are presented. It is argued why the scenario on which the No Free Lunch Theorem is based does not model real-life optimization. For more realistic scenarios, it is argued why optimization techniques differ in their efficiency. For a small example, this claim is proved.
The No Free Lunch and Problem Description Length. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001), 2001
Cited by 39 (5 self)
Abstract:
The No Free Lunch theorem is reviewed and cast within a simple framework for black-box search. A duality result which relates functions being optimized to algorithms optimizing them is obtained and is used to sharpen the No Free Lunch theorem. Observations are made concerning problem description length within the context provided by the results of this paper. It is seen that No Free Lunch results are independent of whether the set of functions (over which a No Free Lunch result holds) is compressible.
An Overview of Evolutionary Algorithms: Practical Issues and Common Pitfalls. Information and Software Technology, 2001
Cited by 34 (0 self)
Abstract:
An overview of evolutionary algorithms is presented covering genetic algorithms, evolution strategies, genetic programming and evolutionary programming. The schema theorem is reviewed and critiqued. Gray codes, bit representations and real-valued representations are discussed for parameter optimization problems. Parallel island models are also reviewed, and the evaluation of evolutionary algorithms is discussed.
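The Gray-code point can be made concrete. The reflected binary Gray code is the standard construction (not code from the paper): adjacent integers map to codewords differing in exactly one bit, so a single-bit mutation can always move between neighboring parameter values, avoiding the "Hamming cliffs" of plain binary encoding (e.g. 7 = 0111 vs 8 = 1000, which differ in all four bits).

```python
def to_gray(n):
    # reflected binary Gray code: adjacent integers map to codewords
    # that differ in exactly one bit
    return n ^ (n >> 1)

def from_gray(g):
    # invert by folding the running XOR of all higher-order bits back in
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Under this encoding a unit mutation of the bit string always reaches an adjacent or nearby integer, which is why Gray codes are often preferred for bit-represented parameter optimization.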