Results 1–10 of 28
Learning the Empirical Hardness of Optimization Problems: The case of combinatorial auctions
In CP, 2002
"... We propose a new approach to understanding the algorithmspecific empirical hardness of optimization problems. In this work we focus on the empirical hardness of the winner determination probleman optimization problem arising in combinatorial auctionswhen solved by ILOG's CPLEX software. We co ..."
Abstract

Cited by 59 (20 self)
 Add to MetaCart
We propose a new approach to understanding the algorithm-specific empirical hardness of optimization problems. In this work we focus on the empirical hardness of the winner determination problem, an optimization problem arising in combinatorial auctions, when solved by ILOG's CPLEX software. We consider nine widely used problem distributions and sample randomly from a continuum of parameter settings for each distribution. First, we contrast the overall empirical hardness of the different distributions. Second, we identify a large number of distribution-nonspecific features of data instances and use statistical regression techniques to learn, evaluate and interpret a function from these features to the predicted hardness of an instance.
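The regression idea in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it fits a one-feature least-squares model mapping an instance feature to log runtime. The feature name (bid-graph edge density) and all numbers are invented for illustration.

```python
# Toy sketch of an empirical hardness model: ordinary least squares
# from one instance feature to observed log runtime.

def fit_linear(xs, ys):
    """One-feature least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical feature: bid-graph edge density of an auction instance,
# paired with the log runtime observed on that instance (made-up data).
density = [0.1, 0.2, 0.3, 0.4, 0.5]
log_runtime = [0.9, 2.1, 3.0, 3.9, 5.1]

slope, intercept = fit_linear(density, log_runtime)
predicted = slope * 0.35 + intercept  # predicted log runtime, new instance
```

The paper itself uses richer feature sets and regression machinery; this only shows the shape of the learned hardness function.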
Exhibiting Knowledge in Planning Problems to Minimize State Encoding Length
In ECP, 1999
"... In this paper we present a generalpurposed algorithm for transforming a planning problem specified in Strips into a concise state description for single state or symbolic exploration. The process of finding a state description consists of four phases. In the first phase we symbolically analyze the ..."
Abstract

Cited by 48 (15 self)
 Add to MetaCart
In this paper we present a general-purpose algorithm for transforming a planning problem specified in Strips into a concise state description for single-state or symbolic exploration. The process of finding a state description consists of four phases. In the first phase we symbolically analyze the domain specification to determine constant and one-way predicates, i.e. predicates that remain unchanged by all operators or that toggle in only one direction, respectively. In the second phase we symbolically merge predicates, which leads to a drastic reduction of state encoding size, while in the third phase we constrain the domains of the predicates to be considered by enumerating the operators of the planning problem. The fourth phase combines the results of the previous phases.
Recent progress in the design and analysis of admissible heuristic functions
In Proc. AAAI-2000, 2000
"... Abstract. In the past several years, significant progress has been made in finding optimal solutions to combinatorial problems. In particular, random instances of both Rubik’s Cube, with over 10 19 states, andthe 5×5 slidingtile puzzle, with almost 10 25 states, have been solvedoptimally. This prog ..."
Abstract

Cited by 18 (1 self)
 Add to MetaCart
In the past several years, significant progress has been made in finding optimal solutions to combinatorial problems. In particular, random instances of both Rubik’s Cube, with over 10^19 states, and the 5×5 sliding-tile puzzle, with almost 10^25 states, have been solved optimally. This progress is not the result of better search algorithms, but of more effective heuristic evaluation functions. In addition, we have learned how to accurately predict the running time of admissible heuristic search algorithms, as a function of the solution depth and the heuristic evaluation function. One corollary of this analysis is that an admissible heuristic function reduces the effective depth of search, rather than the effective branching factor.
Empirical Hardness Models: Methodology and a Case Study on Combinatorial Auctions
"... Is it possible to predict how long an algorithm will take to solve a previouslyunseen instance of an NPcomplete problem? If so, what uses can be found for models that make such predictions? This paper provides answers to these questions and evaluates the answers experimentally. We propose the use ..."
Abstract

Cited by 18 (6 self)
 Add to MetaCart
Is it possible to predict how long an algorithm will take to solve a previously unseen instance of an NP-complete problem? If so, what uses can be found for models that make such predictions? This paper provides answers to these questions and evaluates the answers experimentally. We propose the use of supervised machine learning to build models that predict an algorithm’s runtime given a problem instance. We discuss the construction of these models and describe techniques for interpreting them to gain understanding of the characteristics that cause instances to be hard or easy. We also present two applications of our models: building algorithm portfolios that outperform their constituent algorithms, and generating test distributions that emphasize hard problems. We demonstrate the effectiveness of our techniques in a case study of the combinatorial auction winner determination problem. Our experimental results show that we can build very accurate models of an algorithm’s running time, interpret our models, build an algorithm portfolio that strongly outperforms the best single algorithm, and tune a standard benchmark suite to generate much harder problem instances.
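The portfolio application mentioned above has a very simple core, sketched below under stated assumptions: each algorithm comes with a learned runtime model, and the portfolio runs whichever algorithm is predicted fastest on each instance. The models here are stand-in lambdas over one made-up feature, and the algorithm names are used only as labels.

```python
# Per-instance algorithm selection from predicted runtimes (a sketch).

predicted_runtime = {
    # algorithm name -> hypothetical runtime model over instance features
    "CPLEX": lambda feats: 10.0 * feats["density"],
    "CASS":  lambda feats: 2.0 + 1.0 * feats["density"],
}

def portfolio_choice(feats):
    """Pick the algorithm with minimum predicted runtime for this instance."""
    return min(predicted_runtime, key=lambda a: predicted_runtime[a](feats))

easy = {"density": 0.1}  # CPLEX predicted at 1.0, CASS at 2.1
hard = {"density": 0.9}  # CPLEX predicted at 9.0, CASS at 2.9
```

Because the choice is per instance, such a portfolio can beat every one of its constituent algorithms on a mixed benchmark, which is the effect the abstract reports.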
PSVN: A Vector Representation for Production Systems
1999
"... In this paper we present a production system which acts on fixed length vectors of labels. Our goal is to automatically generate heuristics to search the state space for shortest paths between states efficiently. The heuristic values which guide search in the state space are obtained by searching fo ..."
Abstract

Cited by 16 (10 self)
 Add to MetaCart
In this paper we present a production system which acts on fixed-length vectors of labels. Our goal is to automatically generate heuristics to search the state space for shortest paths between states efficiently. The heuristic values which guide search in the state space are obtained by searching for the shortest path in an abstract space derived from the definition of the original space. In PSVN, a state is a fixed-length vector of labels and abstractions are generated by simply mapping the set of labels to another, smaller set of labels (domain abstraction). A domain abstraction on the labels induces an abstraction of the state space that preserves important properties of the original space while usually being significantly smaller in size. It is guaranteed that the shortest path between two states in the original space is at least as long as the shortest path between their images in the abstract space. Hence, such abstractions provide admissible heuristics for search algorithms such as A* and IDA*. The mapping of states and operators can be efficiently obtained by applying the domain map on the labels. We explore important properties of state spaces defined in PSVN and abstractions generated by domain maps. Despite its simplicity, PSVN is capable of defining all finitely generated permutation groups and such benchmark problems as Rubik's Cube, the sliding-tile puzzles and the Blocks World.
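The domain-abstraction idea described above can be sketched on a tiny invented example (this is not PSVN itself): states are label vectors, a map collapses labels onto a smaller label set, and breadth-first search distances in the abstract space are admissible heuristic values for the original space. The 1×3 blank-swap puzzle below is made up for illustration.

```python
# Domain abstraction sketch: BFS distances in the abstract space give
# admissible h-values for the original space.

from collections import deque

def neighbors(state):
    """Swap the blank ('_') with either adjacent cell of a 1x3 vector."""
    s = list(state)
    i = s.index("_")
    out = []
    for j in (i - 1, i + 1):
        if 0 <= j < len(s):
            t = s[:]
            t[i], t[j] = t[j], t[i]
            out.append(tuple(t))
    return out

def abstract(state, phi):
    """Apply the domain map phi componentwise to a label vector."""
    return tuple(phi.get(x, x) for x in state)

def pdb(goal, phi):
    """BFS from the abstract goal; the distances are admissible h-values."""
    start = abstract(goal, phi)
    dist = {start: 0}
    q = deque([start])
    while q:
        s = q.popleft()
        for t in neighbors(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                q.append(t)
    return dist

phi = {"A": "x", "B": "x"}           # collapse labels A and B into one label
h = pdb(("A", "B", "_"), phi)        # abstract-space distance table
state = ("_", "A", "B")
estimate = h[abstract(state, phi)]   # admissible lower bound on true distance
```

Here the true distance from `("_", "A", "B")` to the goal is also 2, so the abstraction loses nothing on this state; in general the abstract distance can only underestimate, which is exactly the admissibility guarantee the abstract states.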
Experiments with automatically created memorybased heuristics
In Proceedings of the Symposium on Abstraction, Reformulation and Approximation, Lecture Notes in Artificial Intelligence, 2000
"... A memorybased heuristic is a function, h(s), stored in the form of a lookup table: h(s) is computed by mapping s to an index and then retrieving the corresponding entry in the table. In this paper we present a notation for describing state spaces, PSVN, and a method for automatically creating mem ..."
Abstract

Cited by 16 (9 self)
 Add to MetaCart
A memory-based heuristic is a function, h(s), stored in the form of a lookup table: h(s) is computed by mapping s to an index and then retrieving the corresponding entry in the table. In this paper we present a notation for describing state spaces, PSVN, and a method for automatically creating memory-based heuristics for a state space by abstracting its PSVN description. Two investigations of these automatically generated heuristics are presented. First, thousands of automatically generated heuristics are used to experimentally investigate the conjecture by Korf [4] that m · t is a constant, where m is the size of a heuristic's lookup table and t is the number of nodes expanded when the heuristic is used to guide search. Second, a similar large-scale experiment is used to verify that Korf and Reid's complexity analysis [5] can be used to rapidly and reliably choose the best among a given set of heuristics.
MemoryEfficient A* Heuristics for Multiple Sequence Alignment
In Proceedings of the 18th National Conference on Artificial Intelligence (AAAI-02), 2002
"... The time and space needs of an A* search are strongly influenced by the quality of the heuristic evaluation function. Usually there is a ..."
Abstract

Cited by 15 (1 self)
 Add to MetaCart
The time and space needs of an A* search are strongly influenced by the quality of the heuristic evaluation function. Usually there is a ...
A SpaceTime Tradeoff for MemoryBased Heuristics
In Proceedings of the Sixteenth National Conference on Artificial Intelligence (AAAI-99), 1999
"... A memorybased heuristic is a function, h(s), stored in the form of a lookup table (pattern database): h(s) is computed by mapping s to an index and then retrieving the appropriate entry in the table. (Korf 1997) conjectures for search using memorybased heuristics that m \Delta t is a constant ..."
Abstract

Cited by 13 (2 self)
 Add to MetaCart
A memory-based heuristic is a function, h(s), stored in the form of a lookup table (pattern database): h(s) is computed by mapping s to an index and then retrieving the appropriate entry in the table. Korf (1997) conjectures, for search using memory-based heuristics, that m · t is a constant, where m is the size of the heuristic's lookup table and t is search time. In this paper we present a method for automatically generating memory-based heuristics and use this to test Korf's conjecture in a large-scale experiment. Our results confirm that there is a direct relationship between m and t.

Introduction

A heuristic is a function, h(s), that computes an estimate of the distance from state s to a goal state. In a memory-based heuristic this computation consists of mapping s to an index which is then used to look up h(s) in a table. Even heuristics that have a normal functional definition are often precomputed and stored in a lookup table in order to speed up search (Priedi...
Predicting the Performance of IDA* using Conditional Distributions
2010
"... Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes IDA* will expandon a single iteration for a given consistent heuristic, and experimentally demonstrated that it could make very accurate predictions. In this paper we show that, in addition to requiring the heuristic to be ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes IDA* will expand on a single iteration for a given consistent heuristic, and experimentally demonstrated that it could make very accurate predictions. In this paper we show that, in addition to requiring the heuristic to be consistent, their formula’s predictions are accurate only at levels of the brute-force search tree where the heuristic values obey the unconditional distribution that they defined and then used in their formula. We then propose a new formula that works well without these requirements, i.e., it can make accurate predictions of IDA*’s performance for inconsistent heuristics and when the heuristic values at any level do not obey the unconditional distribution. In order to achieve this we introduce the conditional distribution of heuristic values, which is a generalization of their unconditional heuristic distribution. We also provide extensions of our formula that handle individual start states and the augmentation of IDA* with bidirectional pathmax (BPMX), a technique for propagating heuristic values when inconsistent heuristics are used. Experimental results demonstrate the accuracy of our new method and all its variations.
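The Korf-Reid-Edelkamp prediction that this abstract builds on can be sketched as follows, under simplifying assumptions (uniform branching factor, a hand-picked cumulative heuristic distribution): an IDA* iteration with cost bound d expands roughly the sum over depths i of the brute-force node count at depth i times the fraction of states whose heuristic value is at most d - i. The branching factor and distribution below are invented for illustration.

```python
# KRE-style node-expansion estimate for one IDA* iteration (a sketch).

def predicted_expansions(d, branching, P):
    """Sum over levels: nodes at depth i survive pruning if h <= d - i."""
    total = 0.0
    for i in range(d + 1):
        nodes_at_level = branching ** i      # brute-force tree size at depth i
        total += nodes_at_level * P(d - i)   # fraction with h(s) <= d - i
    return total

# Toy cumulative heuristic distribution: P(v) = min(1, v / 10) for v >= 0.
P = lambda v: max(0.0, min(1.0, v / 10))
estimate = predicted_expansions(6, 2, P)
```

The paper's contribution is to replace the single unconditional distribution P with a distribution conditioned on the parent's heuristic value, which is what makes the estimate track inconsistent heuristics; that refinement is not shown here.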
Efficient Memory Bound Puzzles Using Pattern Databases
In Proceedings of the 4th Applied Cryptography and Network Security Conference (ACNS), 2006
"... Abstract. CPU bound client puzzles have been suggested as a defense mechanism against connection depletion attacks. However, the wide disparity in CPU speeds prevents such puzzles from being globally deployed. Recently, Abadi et. al. [1] and Dwork et. al. [2] addressed this limitation by showing tha ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
CPU-bound client puzzles have been suggested as a defense mechanism against connection depletion attacks. However, the wide disparity in CPU speeds prevents such puzzles from being globally deployed. Recently, Abadi et al. [1] and Dwork et al. [2] addressed this limitation by showing that memory access times vary much less than CPU speeds, and hence offer a viable alternative. In this paper, we further investigate the applicability of memory-bound puzzles from a new perspective and propose constructions based on heuristic search methods. Our constructions are derived from a more algorithmic foundation and, as a result, allow us to easily tune parameters that impact puzzle creation and verification costs. Moreover, unlike prior approaches, we address client-side cost and present an extension that allows memory-constrained clients (e.g., PDAs) to implement our construction in a secure fashion.