Results 1–10 of 14
Exploring Hyperheuristic Methodologies with Genetic Programming
Abstract

Cited by 34 (14 self)
Hyperheuristics represent a novel search methodology motivated by the goal of automating the selection or combination of simpler heuristics in order to solve hard computational search problems. An extension of the original hyperheuristic idea is to generate new heuristics that are not currently known. These approaches operate on a search space of heuristics rather than directly on a search space of solutions to the underlying problem, which is the case with most metaheuristic implementations. In the majority of hyperheuristic studies so far, a framework is provided with a set of human-designed heuristics, taken from the literature, with good measured performance in practice. A less well studied approach aims to generate new heuristics from a set of potential heuristic components. The purpose of this chapter is to discuss this class of hyperheuristics, in which Genetic Programming is the most widely used methodology. A detailed discussion is presented, including the steps needed to apply this technique, some representative case studies, a literature review of related work, and a discussion of relevant issues. Our aim is to convey the exciting potential of this innovative approach for automating the heuristic design process.
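The "generate new heuristics" idea described in this abstract can be illustrated with a small sketch: candidate heuristics are expression trees over problem-state features, which a Genetic Programming loop would then evolve by mutation and crossover. The feature names and operator set below are illustrative assumptions (a toy bin-packing setting), not taken from the chapter.

```python
import random

# Operator set and feature set for the evolved heuristics.
# Both are hypothetical; a real study would choose them per problem.
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}
FEATURES = ['item_size', 'space_left']   # assumed bin-packing features

def random_tree(depth=3):
    """Grow a random candidate heuristic: an expression tree whose
    leaves are problem-state features and whose internal nodes are
    arithmetic operators."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(FEATURES)
    return (random.choice(list(OPS)),
            random_tree(depth - 1),
            random_tree(depth - 1))

def evaluate(tree, state):
    """Apply an evolved heuristic to one problem state, returning the
    score it assigns (e.g. to a candidate item placement)."""
    if isinstance(tree, str):                 # leaf: look up a feature
        return state[tree]
    op, left, right = tree                    # internal node: apply op
    return OPS[op](evaluate(left, state), evaluate(right, state))
```

For example, the hand-written tree `('-', 'space_left', 'item_size')` encodes the familiar "remaining slack" rule, while `random_tree()` generates novel candidates for a GP loop to select among.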
Global Optimization For Constrained Nonlinear Programming
, 2001
Abstract

Cited by 14 (2 self)
In this thesis, we develop constrained simulated annealing (CSA), a global optimization algorithm that asymptotically converges to constrained global minima (CGM_dn) with probability one, for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for constrained local minima (CLM_dn) in the theory of discrete constrained optimization using Lagrange multipliers developed in our group. The theory proves the equivalence between the set of discrete saddle points and the set of CLM_dn, leading to the first-order necessary and sufficient condition for CLM_dn. To find ...
Tuning Strategies In Constrained Simulated Annealing For Nonlinear Global Optimization
 Int’l J. of Artificial Intelligence Tools
, 2000
Abstract

Cited by 10 (1 self)
This paper studies various strategies in constrained simulated annealing (CSA), a global optimization algorithm that achieves asymptotic convergence to constrained global minima (CGM) with probability one for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for discrete constrained local minima (CLM) in the theory of discrete Lagrange multipliers and its extensions to continuous and mixed-integer constrained NLPs. The strategies studied include adaptive neighborhoods, distributions to control sampling, acceptance probabilities, and cooling schedules. We report much better solutions than the best-known solutions in the literature on two sets of continuous benchmarks and their discretized versions.
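The CSA scheme described in the two abstracts above can be sketched as a Metropolis search over the joint space of variables and Lagrange multipliers: descend in x, ascend in the multipliers, so the process settles at a saddle point. This is a minimal illustration under an assumed penalty form L(x, λ) = f(x) + Σᵢ λᵢ·max(0, gᵢ(x)); the actual neighborhoods, acceptance probabilities, and cooling schedules are exactly what the paper studies, so everything below is a simplification.

```python
import math
import random

def csa(f, g, x0, neighbors, T0=10.0, alpha=0.95, iters_per_T=100, T_min=1e-4):
    """Sketch of constrained simulated annealing (CSA).

    f: objective; g: list of constraint functions (feasible when
    g_i(x) <= 0); neighbors(x): a random neighbor of x. Searches for a
    saddle point of L(x, lam) = f(x) + sum_i lam_i * max(0, g_i(x)):
    descent in the x subspace, ascent in the lam subspace.
    """
    x = x0
    lam = [0.0] * len(g)

    def L(x, lam):
        return f(x) + sum(l * max(0.0, gi(x)) for l, gi in zip(lam, g))

    T = T0
    while T > T_min:
        for _ in range(iters_per_T):
            if random.random() < 0.5:
                # probe the x subspace: Metropolis descent on L
                x_new = neighbors(x)
                dE = L(x_new, lam) - L(x, lam)
                if dE <= 0 or random.random() < math.exp(-dE / T):
                    x = x_new
            else:
                # probe the lam subspace: Metropolis ascent on L
                i = random.randrange(len(lam))
                lam_new = list(lam)
                lam_new[i] = max(0.0, lam[i] + random.uniform(-1.0, 1.0))
                dE = L(x, lam) - L(x, lam_new)   # sign flipped: ascent
                if dE <= 0 or random.random() < math.exp(-dE / T):
                    lam = lam_new
        T *= alpha          # geometric cooling (one of many schedules)
    return x
```

As a toy run, minimizing f(x) = x² subject to x ≥ 1 (i.e. g(x) = 1 − x ≤ 0) with ±0.1 neighborhood moves drives x toward the constrained minimum at x = 1.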
Statistical Generalization of Performance-Related Heuristics for Knowledge-Lean Applications
 Int'l J. of Artificial Intelligence Tools
, 1996
Abstract

Cited by 8 (5 self)
In this paper, we present new results on the automated generalization of performance-related heuristics learned for knowledge-lean applications. We study methods to statistically generalize new heuristics learned for some small subsets of a problem space (using methods such as genetics-based learning) to unlearned problem subdomains. Our method uses a new statistical metric called probability of win. By assessing the performance of heuristics in a range-independent and distribution-independent manner, we can compare heuristics across problem subdomains in a consistent manner. To illustrate our approach, we show experimental results on generalizing heuristics learned for sequential circuit testing, VLSI cell placement and routing, and branch-and-bound search. We show that generalization can lead to new and robust heuristics that perform better than the original heuristics across problem instances of different characteristics.
Generalization and Generalizability Measures
 IEEE Transactions on Knowledge and Data Engineering
, 1999
Abstract

Cited by 7 (0 self)
In this paper, we define the generalization problem, summarize various approaches to generalization, identify the credit assignment problem, and present the problem of measuring generalizability along with some solutions. We discuss anomalies in the ordering of hypotheses in a subdomain when performance is normalized and averaged, and show conditions under which these anomalies can be eliminated. To generalize performance across subdomains, we present a measure called probability of win that estimates the probability that one hypothesis is better than another. Finally, we discuss some limitations in using probabilities of win and illustrate their application in finding new parameter values for TimberWolf, a package for VLSI cell placement and routing.
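The probability-of-win measure recurring in these abstracts can be illustrated with a small sketch: the chance that one heuristic's true mean performance exceeds another's, averaged over all competitors in a subdomain. For simplicity this sketch compares sample means with a normal approximation, whereas the papers' formulation is based on t-statistics, so treat the exact formula as an assumption; it also requires at least two samples per heuristic.

```python
import math
from statistics import mean, stdev

def p_greater(a, b):
    """Approximate P(true mean of sample a > true mean of sample b),
    treating the two sample means as independent normals (a coarse
    stand-in for the t-test formulation in the papers)."""
    d = mean(a) - mean(b)
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    if se == 0:
        return 0.5 if d == 0 else (1.0 if d > 0 else 0.0)
    z = d / se
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))   # standard normal CDF

def probability_of_win(samples_by_heuristic, h):
    """Probability that heuristic h out-performs a randomly chosen
    competitor in this subdomain (higher performance = better).
    Being a probability, it is range- and distribution-independent,
    so values can be compared across subdomains."""
    others = [k for k in samples_by_heuristic if k != h]
    return sum(p_greater(samples_by_heuristic[h], samples_by_heuristic[k])
               for k in others) / len(others)
```

For example, with three heuristics whose samples are `{'h1': [1.2, 1.1, 1.3], 'h2': [0.9, 1.0, 0.8], 'h3': [1.0, 1.1, 0.9]}`, `probability_of_win(samples, 'h1')` is close to 1, reflecting that h1 dominates both competitors.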
Algorithms for Combinatorial Optimization in Real Time and their Automated Refinement by Genetics-Based Learning
 University of Illinois at Urbana-Champaign
, 1994
Abstract

Cited by 7 (1 self)
The goal of this research is to develop a systematic, integrated method of designing efficient search algorithms that solve optimization problems in real time. The search algorithms studied in this thesis comprise meta-control and primitive search. The class of optimization problems addressed are combinatorial optimization problems, examples of which include many NP-hard scheduling and planning problems, as well as problems in operations research and artificial-intelligence applications. The problems we have addressed have a well-defined problem objective and a finite set of well-defined problem constraints. In this research, we use state-space trees as problem representations. The approach we have undertaken in designing efficient search algorithms is an engineering one and consists of two phases: (a) designing generic search algorithms, and (b) improving, by genetics-based machine-learning methods, the parametric heuristics used in the search algorithms designed. Our approach is a systematic method that integrates domain knowledge, search techniques, and automated learning techniques for designing better search algorithms. Knowledge captured in designing one search algorithm can be carried over to the design of new ones.
The Theory And Applications Of Discrete Constrained Optimization Using Lagrange Multipliers
, 2000
Abstract

Cited by 4 (0 self)
In this thesis, we present a new theory of discrete constrained optimization using Lagrange multipliers and an associated first-order search procedure (DLM) to solve general constrained optimization problems in discrete, continuous, and mixed-integer spaces. The constrained problems are general in the sense that they do not assume differentiability or convexity of the functions. Our proposed theory and methods are targeted at discrete problems and can be extended to continuous and mixed-integer problems by coding continuous variables using a floating-point representation (discretization). We have characterized the errors incurred by such discretization and have proved that upper bounds on these errors exist. Hence, continuous and mixed-integer constrained problems, as well as discrete ones, can be handled by DLM in a unified way with bounded errors.
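A first-order search of the kind this abstract describes can be sketched as alternating discrete descent on the variables with ascent on the multipliers, terminating at a discrete saddle point. The penalty form and the multiplier update rule below are simplified assumptions for illustration, not the thesis's exact procedure.

```python
def dlm(f, g, x0, neighbors, c=1.0, max_iters=10000):
    """Sketch of a discrete first-order Lagrangian search (DLM-style).

    f: objective; g: list of constraint functions (feasible when
    g_i(x) <= 0); neighbors(x): iterable of discrete neighbors of x.
    Alternates greedy descent on x in the discrete Lagrangian
        L(x, lam) = f(x) + sum_i lam_i * max(0, g_i(x))
    with ascent on lam whenever no neighbor improves L, and stops at a
    discrete saddle point (no improving x-move, no violated constraint).
    """
    x, lam = x0, [0.0] * len(g)

    def L(x, lam):
        return f(x) + sum(l * max(0.0, gi(x)) for l, gi in zip(lam, g))

    for _ in range(max_iters):
        # descent step: move to the best improving neighbor, if any
        best = min(neighbors(x), key=lambda y: L(y, lam), default=x)
        if L(best, lam) < L(x, lam):
            x = best
            continue
        # stuck in x: raise multipliers on violated constraints
        viol = [max(0.0, gi(x)) for gi in g]
        if all(v == 0 for v in viol):
            return x            # discrete saddle point: CLM candidate
        lam = [l + c * v for l, v in zip(lam, viol)]
    return x
```

For instance, minimizing f(x) = x² over the integers subject to x ≥ 3 (g(x) = 3 − x, neighbors x ± 1) first overshoots into the infeasible region while λ is small, then the growing multiplier pulls the search back to the constrained minimum x = 3.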
Automated Design of Knowledge-Lean Heuristics: Learning, Resource Scheduling, and Generalization
, 1996
Abstract

Cited by 2 (1 self)
In this thesis, we present new methods for the automated design of new heuristics in knowledge-lean applications and for finding heuristics that can be generalized to unlearned test cases. These applications lack domain knowledge for credit assignment; hence, operators for composing new heuristics are generally model-free, domain-independent, and syntactic in nature. The operators we have used are genetics-based; examples include mutation and crossover. Learning is based on a generate-and-test paradigm that maintains a pool of competing heuristics, tests them to a limited extent, creates new ones from those that have performed well in the past, and prunes poor ones from the pool. We have studied four important issues in learning better heuristics: (a) partitioning a problem domain into smaller subsets, called subdomains, so that performance values within each subdomain can be evaluated statistically; (b) anomalies in performance evaluation within a subdomain; (c) rational scheduling of limited computational resources for testing candidate heuristics in single-objective as well as multi-objective learning; and (d) finding heuristics that can be generalized to unlearned subdomains. We show experimental results in learning better heuristics for (a) process placement for distributed-memory multicomputers, (b) node decomposition in branch-and-bound search, (c) generation of test patterns in VLSI circuit testing, (d) VLSI cell placement and routing, and (e) blind equalization.
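The generate-and-test paradigm described above can be sketched as a small loop: keep a pool of competing heuristics, test each to a limited extent, breed new candidates from the best performers via mutation and crossover, and prune the poorest. All names and defaults below are illustrative assumptions, not the thesis's system; resource scheduling and subdomain partitioning are omitted.

```python
import random

def learn_heuristics(evaluate, random_heuristic, mutate, crossover,
                     pool_size=10, generations=20, tests_per_round=3):
    """Sketch of pool-based generate-and-test learning of heuristics.

    evaluate(h) returns one (possibly noisy) performance sample, higher
    is better; random_heuristic/mutate/crossover define the heuristic
    representation. Returns the best heuristic found.
    """
    # each pool entry is (heuristic, list of performance samples so far)
    pool = [(random_heuristic(), []) for _ in range(pool_size)]
    for _ in range(generations):
        # limited testing: gather a few more samples per candidate
        for h, samples in pool:
            samples.extend(evaluate(h) for _ in range(tests_per_round))
        # rank by mean observed performance, best first
        pool.sort(key=lambda hs: sum(hs[1]) / len(hs[1]), reverse=True)
        survivors = pool[: pool_size // 2]        # prune poor performers
        children = []
        while len(survivors) + len(children) < pool_size:
            p1, p2 = random.sample(survivors, 2)  # breed from the best
            child = mutate(crossover(p1[0], p2[0]))
            children.append((child, []))          # untested so far
        pool = survivors + children
    return pool[0][0]
```

As a toy run, letting a "heuristic" be a single real parameter scored by evaluate(h) = −(h − 2)², with Gaussian mutation and averaging crossover, the pool converges toward the optimum parameter value 2.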
Genetics-Based Learning and Statistical Generalization
 Knowledge-Based Systems: Advanced Concepts, Tools and Applications
, 1997
Abstract

Cited by 2 (2 self)
Introduction. Heuristics are used in many real-world engineering applications, ranging from computer-aided design, optimization, and scheduling to computer communications. Since the relationship between a heuristic's performance and its control is unknown, some parameters, functions, and procedures are designed either from user experience or experimentally. These heuristics can usually be improved by automated tuning, machine learning, and generalization. In this chapter, we study the problem of performance generalization of learned heuristics. 1.1 Terminologies. We define a problem solver as an algorithm or, more generally, a software package used to solve a problem. A problem solver can be regarded as a black box, with some heuristic components or heuristics designed in an ad hoc way, where a heuristic is "a process that may solve a problem but offers no guarantees of doing so" ...
Teacher: A Genetics-Based System for Learning and for Generalizing Heuristics
 Evolutionary Computation. World Scientific Publishing Co. Pte. Ltd
, 1998
Abstract
In this chapter, we present the design of Teacher (an acronym for TEchniques for the Automated Creation of HEuRistics), a system for learning and for generalizing heuristics used in problem solving. Our system learns knowledge-lean heuristics whose performance is measured statistically. The objective of the design process is to find, under resource constraints, heuristic methods (HMs) improved over existing ones. Teacher addresses five general issues in learning heuristics: (1) decomposition of a problem solver into smaller components and integration of the HMs designed for each; (2) classification of an application domain into subdomains so that performance can be evaluated statistically in each; (3) generation of new and improved HMs based on past performance information and previously generated heuristics; (4) evaluation of each HM's performance; and (5) performance generalization to find HMs that perform well across the entire application domain. Teacher employs a genetics...