Results 1 – 10 of 784
Depth-First Iterative-Deepening: An Optimal Admissible Tree Search
Artificial Intelligence, 1985
Cited by 414 (15 self)
The complexities of various search algorithms are considered in terms of time, space, and cost of solution path. It is known that breadth-first search requires too much space and depth-first search can use too much time and doesn't always find a cheapest path. A depth-first iterative-deepening algorithm is shown to be asymptotically optimal along all three dimensions for exponential tree searches. The algorithm has been used successfully in chess programs, has been effectively combined with bidirectional search, and has been applied to best-first heuristic search as well. This heuristic depth-first iterative-deepening algorithm is the only known algorithm that is capable of finding optimal solutions to randomly generated instances of the Fifteen Puzzle within practical resource limits.
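The depth-first iterative-deepening strategy this abstract describes can be sketched in a few lines (a minimal illustration on a hypothetical toy graph, not the chess or Fifteen Puzzle implementations the paper reports on):

```python
def dfs_limited(node, goal, depth, neighbors):
    """Depth-limited DFS; returns a path to goal or None."""
    if node == goal:
        return [node]
    if depth == 0:
        return None
    for nxt in neighbors(node):
        sub = dfs_limited(nxt, goal, depth - 1, neighbors)
        if sub is not None:
            return [node] + sub
    return None

def iddfs(start, goal, neighbors, max_depth=50):
    """Depth-first iterative deepening: run depth-limited DFS with
    increasing limits. Space is linear in the depth, and the shallowest
    (cheapest, for unit costs) solution is found first."""
    for depth in range(max_depth + 1):
        path = dfs_limited(start, goal, depth, neighbors)
        if path is not None:
            return path
    return None

# Hypothetical toy graph: the depth-2 path through 'c' is found
# before any deeper path could be.
graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['e'], 'd': [], 'e': []}
print(iddfs('a', 'e', lambda n: graph[n]))  # ['a', 'c', 'e']
```

Each iteration repeats the shallower work, but on exponential trees the final iteration dominates the total cost, which is how the algorithm matches breadth-first search's optimal path length while using only linear space.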
Reconciling Schemas of Disparate Data Sources: A Machine-Learning Approach
In SIGMOD Conference, 2001
Cited by 350 (47 self)
A data-integration system provides access to a multitude of data sources through a single mediated schema. A key bottleneck in building such systems has been the laborious manual construction of semantic mappings between the source schemas and the mediated schema. We describe LSD, a system that employs and extends current machine-learning techniques to semi-automatically find such mappings. LSD first asks the user to provide the semantic mappings for a small set of data sources, then uses these mappings together with the sources to train a set of learners. Each learner exploits a different type of information either in the source schemas or in their data. Once the learners have been trained, LSD finds semantic mappings for a new data source by applying the learners, then combining their predictions using a meta-learner. To further improve matching accuracy, we extend machine learning techniques so that LSD can incorporate domain constraints as an additional source of knowledge, and develop a novel learner that utilizes the structural information in XML documents. Our approach thus is distinguished in that it incorporates multiple types of knowledge. Importantly, its architecture is extensible to additional learners that may exploit new kinds of information. We describe a set of experiments on several real-world domains, and show that LSD proposes semantic mappings with a high degree of accuracy.
Fitness Distance Correlation as a Measure of Problem Difficulty for Genetic Algorithms
Proceedings of the Sixth International Conference on Genetic Algorithms, 1995
Cited by 204 (5 self)
A measure of search difficulty, fitness distance correlation (FDC), is introduced and examined in relation to genetic algorithm (GA) performance. In many cases, this correlation can be used to predict the performance of a GA on problems with known global maxima. It correctly classifies easy deceptive problems as easy and difficult non-deceptive problems as difficult, indicates when Gray coding will prove better than binary coding, and is consistent with the surprises encountered when GAs were used on the Tanese and royal road functions. The FDC measure is a consequence of an investigation into the connection between GAs and heuristic search.

1 INTRODUCTION

A correspondence between evolutionary algorithms and heuristic state space search is developed in (Jones, 1995b). This is based on a model of fitness landscapes as directed, labeled graphs that are closely related to the state spaces employed in heuristic search. We examine one aspect of this correspondence, the relationship between...
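The FDC measure itself is the sample correlation between fitness and distance to a known global optimum; a minimal sketch follows (the OneMax example is an illustrative assumption, not taken from the paper):

```python
import math
import random

def fdc(points, fitness, distance_to_opt):
    """Fitness distance correlation: Pearson correlation between
    fitness values and distances to a known global optimum over a
    sample of points."""
    fs = [fitness(p) for p in points]
    ds = [distance_to_opt(p) for p in points]
    n = len(points)
    mf, md = sum(fs) / n, sum(ds) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fs, ds)) / n
    sf = math.sqrt(sum((f - mf) ** 2 for f in fs) / n)
    sd = math.sqrt(sum((d - md) ** 2 for d in ds) / n)
    return cov / (sf * sd)

# OneMax on 10-bit strings (illustrative): fitness = number of ones,
# distance = Hamming distance to the all-ones optimum. Since the
# distance is exactly 10 - fitness, the FDC is -1.
random.seed(0)
sample = [[random.randint(0, 1) for _ in range(10)] for _ in range(200)]
r = fdc(sample, sum, lambda bits: 10 - sum(bits))
print(round(r, 3))  # -1.0
```

For a maximization problem, correlation near -1 suggests fitness reliably guides search toward the optimum, while correlation near +1 suggests a misleading (deceptive) landscape.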
Generalized best-first search strategies and the optimality of A*
Journal of the ACM, 1985
Cited by 161 (12 self)
This paper reports several properties of heuristic best-first search strategies whose scoring functions f depend on all the information available from each candidate path, not merely on the current cost g and the estimated completion cost h. It is shown that several known properties of A* retain their form (with the min-max of f playing the role of the optimal cost), which helps establish general tests of admissibility and general conditions for node expansion for these strategies. On the basis of this framework the computational optimality of A*, in the sense of never expanding a node that can be skipped by some other algorithm having access to the same heuristic information that A* uses, is examined. A hierarchy of four optimality types is defined, and three classes of algorithms and four domains of problem instances are considered. Computational performances relative to these algorithms and domains are appraised. For each class-domain combination, we then identify the strongest type of optimality that exists and the algorithm for achieving it. The main results of this paper relate to the class of algorithms that, like A*, return optimal solutions (i.e., are admissible) when all cost estimates are optimistic (i.e., h ≤ h*). On this class, A* is shown to be not optimal, and it is also shown that no optimal algorithm exists, but if the performance tests are confined to cases in which the estimates are also consistent, then A* is indeed optimal. Additionally, A* is also shown to be optimal over a subset of the latter class containing all best-first algorithms that are guided by path-dependent evaluation functions.
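For reference, a minimal sketch of the A* baseline these optimality results concern, ordered by f(n) = g(n) + h(n); the toy graph and the zero heuristic (trivially admissible and consistent) are illustrative assumptions:

```python
import heapq

def astar(start, goal, neighbors, h):
    """Best-first search ordered by f(n) = g(n) + h(n). With an
    admissible h (never overestimating the remaining cost), the
    cost returned on reaching the goal is optimal."""
    g = {start: 0}
    frontier = [(h(start), start)]
    closed = set()
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            return g[node]
        if node in closed:
            continue
        closed.add(node)
        for nxt, cost in neighbors(node):
            tentative = g[node] + cost
            if tentative < g.get(nxt, float('inf')):
                g[nxt] = tentative
                heapq.heappush(frontier, (tentative + h(nxt), nxt))
    return None

# Hypothetical weighted graph; h = 0 reduces A* to Dijkstra's algorithm.
edges = {'a': [('b', 1), ('c', 4)], 'b': [('c', 1)], 'c': []}
print(astar('a', 'c', lambda n: edges[n], lambda n: 0))  # 2
```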
The EXODUS Optimizer Generator
1987
Cited by 159 (7 self)
This paper presents the design and an initial performance evaluation of the query optimizer generator designed for the EXODUS extensible database system. Algebraic transformation rules are translated into an executable query optimizer, which transforms query trees and selects methods for executing operations according to cost functions associated with the methods. The search strategy avoids exhaustive search and it modifies itself to take advantage of past experience. Computational results show that an optimizer generated for a relational system produces access plans almost as good as those produced by exhaustive search, with the search time cut to a small fraction.
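The rule-driven architecture can be illustrated with a deliberately tiny sketch (a hypothetical toy, not EXODUS's rule language or its self-modifying search strategy): transformation rules generate equivalent query trees, and a cost function selects among them.

```python
def commute_join(tree):
    """Transformation rule: join(A, B) -> join(B, A)."""
    if tree[0] == 'join':
        return ('join', tree[2], tree[1])
    return None

def optimize(tree, rules, cost):
    """Explore the space of equivalent trees reachable by applying
    rules at the root, keeping the cheapest tree found."""
    best, best_cost = tree, cost(tree)
    frontier = [tree]
    seen = {tree}
    while frontier:
        t = frontier.pop()
        for rule in rules:
            alt = rule(t)
            if alt is not None and alt not in seen:
                seen.add(alt)
                frontier.append(alt)
                if cost(alt) < best_cost:
                    best, best_cost = alt, cost(alt)
    return best

# Toy cost model: prefer the smaller relation on the left of a join.
sizes = {'R': 1000, 'S': 10}
cost = lambda t: sizes[t[1]] if t[0] == 'join' else 0
print(optimize(('join', 'R', 'S'), [commute_join], cost))  # ('join', 'S', 'R')
```

A generated optimizer in the paper's sense compiles a whole rule set into this kind of search, with cost functions attached to the execution methods rather than a fixed table.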
Separate-and-conquer rule learning
Artificial Intelligence Review, 1999
Cited by 135 (29 self)
This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity, as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases.
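The covering loop behind the separate-and-conquer strategy fits in a few lines (a sketch with a hypothetical one-rule threshold learner, not any specific system from the survey):

```python
def separate_and_conquer(examples, learn_one_rule):
    """Covering loop: learn one rule (conquer), remove the positive
    examples it covers (separate), and repeat until no positives
    remain or no further rule can be learned."""
    rules = []
    remaining = list(examples)
    while any(label for _, label in remaining):
        rule = learn_one_rule(remaining)
        if rule is None:
            break
        rules.append(rule)
        remaining = [(x, y) for x, y in remaining if not (y and rule(x))]
    return rules

# Hypothetical one-rule learner on 1-D data: threshold at the
# smallest remaining positive value.
def learn_one_rule(examples):
    positives = [x for x, label in examples if label]
    if not positives:
        return None
    threshold = min(positives)
    return lambda x: x >= threshold

data = [(x, x > 5) for x in range(10)]  # x > 5 is the positive class
rules = separate_and_conquer(data, learn_one_rule)
print(len(rules))  # 1: a single threshold rule covers every positive
```

The survey's three dimensions correspond to choices inside `learn_one_rule` (search bias), the form of the rules it can return (language bias), and when the outer loop stops or prunes (overfitting avoidance bias).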
Practical Applications of Constraint Programming
Constraints, 1996
Cited by 105 (1 self)
Constraint programming is newly flowering in industry. Several companies have recently started up to exploit the technology, and the number of industrial applications is now growing very quickly. This survey will seek, by examples, ...
Disjoint pattern database heuristics
Artificial Intelligence, 2002
Cited by 104 (24 self)
We explore a method for computing admissible heuristic evaluation functions for search problems. It utilizes pattern databases (Culberson & Schaeffer, 1998), which are precomputed tables of the exact cost of solving various subproblems of an existing problem. Unlike standard pattern database heuristics, however, we partition our problems into disjoint subproblems, so that the costs of solving the different subproblems can be added together without overestimating the cost of solving the original problem. Previously (Korf & Felner, 2002) we showed how to statically partition the sliding-tile puzzles into disjoint groups of tiles to compute an admissible heuristic, using the same partition for each state and problem instance. Here we extend the method and show that it applies to other domains as well. We also present another method for additive heuristics which we call dynamically partitioned pattern databases. Here we partition the problem into disjoint subproblems for each state of the search dynamically. We discuss the pros and cons of each of these methods and apply both methods to three different problem domains: the sliding-tile puzzles, the 4-peg Towers of Hanoi problem, and finding an optimal vertex cover of a graph. We find that in some problem domains, static partitioning is most effective, while in others dynamic partitioning is a better choice. In each of these problem domains, either statically partitioned or dynamically partitioned pattern database heuristics are the best known heuristics for the problem.
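The additivity argument can be made concrete in miniature (a hypothetical toy in which the 8-puzzle goal places tile t at index t; real disjoint pattern databases precompute multi-tile subproblem costs rather than single-tile Manhattan distances):

```python
def manhattan(state, tile, width=3):
    """Exact cost of the single-tile subproblem (all other tiles may
    move freely): the tile's Manhattan distance to its goal cell.
    In this toy labeling the goal places tile t at index t."""
    idx = state.index(tile)
    return abs(idx // width - tile // width) + abs(idx % width - tile % width)

def additive_heuristic(state, groups, subcost):
    """Sum of subproblem costs over disjoint groups. Because each
    group's cost counts only moves of its own tiles, the sum never
    overestimates, so admissibility is preserved."""
    return sum(subcost(state, group) for group in groups)

# Singleton groups reduce the additive heuristic to plain Manhattan
# distance; disjoint pattern databases use larger groups whose exact
# interaction-aware costs are precomputed and stored in tables.
state = (1, 0, 2, 3, 4, 5, 6, 7, 8)  # 0 is the blank
h = additive_heuristic(state, list(range(1, 9)), manhattan)
print(h)  # 1: only tile 1 is one move from its goal cell
```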
OPUS: An efficient admissible algorithm for unordered search
Journal of Artificial Intelligence Research, 1995
Cited by 75 (14 self)
OPUS is a branch and bound search algorithm that enables efficient admissible search through spaces for which the order of search operator application is not significant. The algorithm’s search efficiency is demonstrated with respect to very large machine learning search spaces. The use of admissible search is of potential value to the machine learning community as it means that the exact learning biases to be employed for complex learning tasks can be precisely specified and manipulated. OPUS also has potential for application in other areas of artificial intelligence, notably, truth maintenance.
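The general shape of admissible branch and bound over unordered choices can be sketched as follows (a hedged illustration of the idea, not OPUS's actual pruning rules): fixing an ordering ensures each subset of operators is generated at most once, and an optimistic bound prunes branches that cannot beat the incumbent.

```python
def branch_and_bound_subset(values, k):
    """Find the best sum over subsets of at most k items. Each subset
    is generated at most once by only adding items after the last one
    chosen, and a branch is cut when an optimistic bound on all of its
    descendants cannot beat the best solution found so far."""
    values = sorted(values, reverse=True)  # helps the bound prune early
    best = [0.0, ()]  # empty subset is a valid solution

    def expand(start, chosen, total):
        if total > best[0]:
            best[0], best[1] = total, tuple(chosen)
        if len(chosen) == k:
            return
        slots = k - len(chosen)
        for i in range(start, len(values)):
            # optimistic bound: take this item plus the next-best ones
            optimistic = total + sum(values[i:i + slots])
            if optimistic <= best[0]:
                break  # values are sorted, so later branches fare no better
            chosen.append(values[i])
            expand(i + 1, chosen, total + values[i])
            chosen.pop()

    expand(0, [], 0.0)
    return best[0]

print(branch_and_bound_subset([3, -1, 4, 1, -5, 9], 2))  # 13.0 (9 + 4)
```

Because the bound never underestimates what a branch could achieve, pruning discards only provably inferior subtrees, so the search remains admissible in the same sense the abstract emphasizes.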