Results 1-10 of 14
Learning the Empirical Hardness of Optimization Problems: The case of combinatorial auctions
In CP, 2002
Abstract

Cited by 59 (20 self)
We propose a new approach to understanding the algorithm-specific empirical hardness of optimization problems. In this work we focus on the empirical hardness of the winner determination problem (an optimization problem arising in combinatorial auctions) when solved by ILOG's CPLEX software. We consider nine widely-used problem distributions and sample randomly from a continuum of parameter settings for each distribution. First, we contrast the overall empirical hardness of the different distributions. Second, we identify a large number of distribution-nonspecific features of data instances and use statistical regression techniques to learn, evaluate and interpret a function from these features to the predicted hardness of an instance.
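The regression step described above can be illustrated with a minimal sketch. The feature name and runtimes below are synthetic stand-ins, and the paper's actual models use many features and richer regression techniques; this only shows the shape of the mapping from instance features to predicted log runtime.

```python
# Sketch: fit a linear model from one instance feature to log runtime.
# All data here is made up for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical feature values and log10 runtimes for training instances.
features = [2.0, 3.0, 4.0, 5.0, 6.0]
log_runtimes = [0.1, 0.5, 1.1, 1.4, 2.0]

a, b = fit_line(features, log_runtimes)
predicted = a + b * 4.5   # predicted log-runtime for an unseen instance
```

The learned coefficients can then be inspected directly, which is what makes such models interpretable as well as predictive.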
Understanding Random SAT: Beyond the Clauses-to-Variables Ratio
In Proc. of CP-04
Abstract

Cited by 44 (17 self)
It is well known that the ratio of the number of clauses to the number of variables in a random k-SAT instance is highly correlated with the instance's empirical hardness. We consider the problem of identifying such features of random SAT instances automatically with machine learning. We describe and analyze models for three SAT solvers (kcnfs, oksolver and satz) and for two different distributions of instances: uniform random 3-SAT with varying ratio of clauses-to-variables, and uniform random 3-SAT with fixed ratio of clauses-to-variables.
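The ratio feature itself is trivial to extract. Here is a minimal sketch that reads it from the header of a DIMACS CNF instance (assuming a well-formed `p cnf` line; the instance below is a toy header, not real data):

```python
# Extract the clauses-to-variables ratio, the classic empirical-hardness
# feature for random k-SAT, from a DIMACS CNF header.

def clause_var_ratio(dimacs: str) -> float:
    for line in dimacs.splitlines():
        tok = line.split()
        if len(tok) >= 4 and tok[0] == "p" and tok[1] == "cnf":
            n_vars, n_clauses = int(tok[2]), int(tok[3])
            return n_clauses / n_vars
    raise ValueError("no 'p cnf' header found")

# A random 3-SAT instance with 100 variables and 426 clauses sits near
# the well-known phase-transition ratio of roughly 4.26.
header = "c toy instance\np cnf 100 426"
ratio = clause_var_ratio(header)   # 4.26
```

The point of the paper is precisely that many such cheap syntactic features, beyond this one, can be combined by a learned model.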
Backbones in Optimization and Approximation
In IJCAI-01, 2001
Abstract

Cited by 28 (3 self)
We study the impact of backbones in optimization and approximation problems. We show that some optimization problems like graph coloring resemble decision problems, with problem hardness positively correlated with backbone size. For other optimization problems like blocks world planning and traveling salesperson problems, problem hardness is weakly and negatively correlated with backbone size, while the cost of finding optimal and approximate solutions is positively correlated with backbone size. A third class of optimization problems, like number partitioning, has regions of both types of behavior. We find that to observe the impact of backbone size on problem hardness, it is necessary to eliminate some symmetries, perform trivial reductions and factor out the effective problem size.
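For intuition about the backbone notion used above: on a small satisfiability instance, the backbone (the variables frozen to one value in every satisfying assignment) can be computed by brute-force enumeration. A sketch, practical only for tiny formulas:

```python
from itertools import product

def satisfies(clauses, assign):
    """clauses: lists of signed ints (DIMACS style); assign: var -> bool."""
    return all(any((lit > 0) == assign[abs(lit)] for lit in cl) for cl in clauses)

def backbone(clauses, n_vars):
    """Variables taking the same value in every satisfying assignment."""
    variables = range(1, n_vars + 1)
    models = [dict(zip(variables, bits))
              for bits in product([False, True], repeat=n_vars)
              if satisfies(clauses, dict(zip(variables, bits)))]
    if not models:
        return {}  # unsatisfiable: no backbone under this definition
    return {v: models[0][v] for v in variables
            if all(m[v] == models[0][v] for m in models)}

# x1 is forced True in every model; x2 is free, so only x1 is backbone.
result = backbone([[1], [1, 2]], 2)   # {1: True}
```

For optimization problems the same idea applies with "satisfying assignment" replaced by "optimal solution".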
Backbones and backdoors in satisfiability
In Proceedings of the National Conference on Artificial Intelligence (AAAI), 2005
Abstract

Cited by 28 (1 self)
We study the backbone and the backdoors of propositional satisfiability problems. We make a number of theoretical, algorithmic and experimental contributions. From a theoretical perspective, we prove that backbones are hard even to approximate. From an algorithmic perspective, we present a number of different procedures for computing backdoors. From an empirical perspective, we study the correlation between being in the backbone and in a backdoor. Experiments show that there tends to be very little overlap between backbones and backdoors. We also study problem hardness for the Davis-Putnam procedure. Problem hardness appears to be correlated with the size of strong backdoors, and weakly correlated with the size of the backbone, but does not appear to be correlated with the size of weak backdoors nor with their number. Finally, to isolate the effect of backdoors, we look at problems with no backbone.
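To make the backdoor notion concrete, here is a brute-force sketch that checks whether a given variable set is a weak backdoor with respect to unit propagation as the sub-solver (one common choice in this literature; only feasible for tiny formulas, and not any of the paper's actual procedures):

```python
from itertools import product

def assign_lits(clauses, lits):
    """Simplify clauses under true literals; None signals a conflict."""
    true = set(lits)
    out = []
    for cl in clauses:
        if any(l in true for l in cl):
            continue                      # clause satisfied: drop it
        reduced = [l for l in cl if -l not in true]
        if not reduced:
            return None                   # clause falsified
        out.append(reduced)
    return out

def unit_prop_solves(clauses):
    """True iff unit propagation alone empties the formula (solves it)."""
    while clauses:
        units = [cl[0] for cl in clauses if len(cl) == 1]
        if not units:
            return False                  # stuck: sub-solver gives up
        clauses = assign_lits(clauses, units[:1])
        if clauses is None:
            return False                  # propagation hit a conflict
    return True

def is_weak_backdoor(clauses, variables):
    """Some assignment to `variables` lets unit propagation finish the job."""
    for bits in product([False, True], repeat=len(variables)):
        lits = [v if b else -v for v, b in zip(variables, bits)]
        rest = assign_lits(clauses, lits)
        if rest is not None and unit_prop_solves(rest):
            return True
    return False

# Fixing x1 (either way) leaves a formula unit propagation can solve.
found = is_weak_backdoor([[1, 2], [-1, 3], [-3, 2]], [1])
```

A strong backdoor strengthens the condition: every assignment to the set must let the sub-solver decide the residual formula.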
Problem Difficulty for Tabu Search in Job-Shop Scheduling
Artificial Intelligence, 2002
Abstract

Cited by 20 (7 self)
Tabu search algorithms are among the most effective approaches for solving the job-shop scheduling problem (JSP). Yet, we have little understanding of why these algorithms work so well, and under what conditions. We develop a model of problem difficulty for tabu search in the JSP, borrowing from similar models developed for SAT and other NP-complete problems. We show that the mean distance between random local optima and the nearest optimal solution is highly correlated with the cost of locating optimal solutions to typical, random JSPs. Additionally, this model accounts for the cost of locating suboptimal solutions, and provides an explanation for differences in the relative difficulty of square versus rectangular JSPs. We also identify two important limitations of our model. First, model accuracy is inversely correlated with problem difficulty, and is exceptionally poor for rare, very high-cost problem instances. Second, the model is significantly less accurate for structured, non-random JSPs. Our results are also likely to be useful in future research on difficulty models of local search in SAT, as local search cost in both SAT and the JSP is largely dictated by the same search space features. Similarly, our research represents the first attempt to quantitatively model the cost of tabu search for any NP-complete problem, and may possibly be leveraged in an effort to understand tabu search in problems other than job-shop scheduling.
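The model's central quantity, the mean distance from random local optima to the nearest optimal solution, is easy to state in code. A sketch over generic solution encodings (the toy strings below are made-up placeholders, not real JSP schedules):

```python
def hamming(a, b):
    """Disagreement count between two equal-length solution encodings."""
    return sum(x != y for x, y in zip(a, b))

def mean_dist_to_nearest_optimum(local_optima, optima):
    """Mean distance from each local optimum to its closest global optimum,
    the quantity such difficulty models correlate with search cost."""
    return (sum(min(hamming(s, o) for o in optima) for s in local_optima)
            / len(local_optima))

# Toy example: two sampled local optima, one known global optimum.
d = mean_dist_to_nearest_optimum(["001", "011"], ["000"])   # 1.5
```

In practice the local optima are sampled by running the local search from random starts, and the optima come from an exact solver.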
Search on High Degree Graphs
In 17th International Joint Conference on Artificial Intelligence, 2001
Abstract

Cited by 19 (4 self)
We show that nodes of high degree tend to occur infrequently in random graphs but frequently in a wide variety of graphs associated with real-world search problems. We then study some alternative models for randomly generating graphs which have been proposed to give more realistic topologies. For example, we show that Watts and Strogatz's small world model has a narrow distribution of node degree. On the other hand, Barabasi and Albert's power-law model gives graphs with both nodes of high degree and a small world topology. These graphs may therefore be useful for benchmarking. We then measure the impact of nodes of high degree and a small world topology on the cost of coloring graphs. The long tail in search costs observed with small world graphs disappears when these graphs are also constructed to contain nodes of high degree. We conjecture that this is a result of the small size of their "backbone", pairs of edges that are frozen to be the same color.
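A minimal sketch of the Barabasi-Albert preferential-attachment process mentioned above: each new node attaches to m existing nodes chosen with probability proportional to their current degree. This is a simplification (for instance, the seed graph handling is cruder than in the published model):

```python
import random

def ba_graph(n, m, seed=0):
    """Preferential attachment via a repeated-node list: sampling from it
    uniformly picks nodes with probability proportional to degree."""
    rng = random.Random(seed)
    repeated = list(range(m))        # crude seed: m nodes, weight 1 each
    edges = []
    for new in range(m, n):
        targets = set()
        while len(targets) < m:      # sample m distinct attachment points
            targets.add(rng.choice(repeated))
        for t in targets:
            edges.append((new, t))
        repeated.extend(targets)     # endpoints gain degree...
        repeated.extend([new] * m)   # ...and so does the new node
    return edges

def degrees(edges):
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

edges = ba_graph(200, 2)
deg = degrees(edges)   # heavy-tailed: early nodes become high-degree hubs
```

Comparing `max(deg.values())` against an Erdos-Renyi graph of the same size and edge count makes the high-degree-node contrast from the abstract visible.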
Empirical Hardness Models: Methodology and a Case Study on Combinatorial Auctions
Abstract

Cited by 19 (6 self)
Is it possible to predict how long an algorithm will take to solve a previously unseen instance of an NP-complete problem? If so, what uses can be found for models that make such predictions? This paper provides answers to these questions and evaluates the answers experimentally. We propose the use of supervised machine learning to build models that predict an algorithm’s runtime given a problem instance. We discuss the construction of these models and describe techniques for interpreting them to gain understanding of the characteristics that cause instances to be hard or easy. We also present two applications of our models: building algorithm portfolios that outperform their constituent algorithms, and generating test distributions that emphasize hard problems. We demonstrate the effectiveness of our techniques in a case study of the combinatorial auction winner determination problem. Our experimental results show that we can build very accurate models of an algorithm’s running time, interpret our models, build an algorithm portfolio that strongly outperforms the best single algorithm, and tune a standard benchmark suite to generate much harder problem instances.
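At runtime, the portfolio idea reduces to "run whichever algorithm the models predict to be fastest". A sketch with hypothetical hand-coded models standing in for the learned regressions (the solver names, feature, and coefficients are all illustrative, not from the paper):

```python
def portfolio_choice(features, models):
    """Pick the algorithm whose model predicts the lowest runtime."""
    return min(models, key=lambda name: models[name](features))

# Stand-ins for learned per-algorithm runtime models (made-up coefficients):
models = {
    "solver_A": lambda f: 0.5 + 2.0 * f["bid_good_ratio"],
    "solver_B": lambda f: 3.0 + 0.1 * f["bid_good_ratio"],
}

easy_for_A = portfolio_choice({"bid_good_ratio": 0.5}, models)   # "solver_A"
easy_for_B = portfolio_choice({"bid_good_ratio": 10.0}, models)  # "solver_B"
```

Because the choice is per-instance, such a portfolio can beat every one of its constituent algorithms on aggregate runtime.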
Identifying and Exploiting Problem Structures Using Explanation-based Constraint Programming
 Constraints
Abstract

Cited by 13 (2 self)
Identifying structures in a given combinatorial problem is often a key step for designing efficient search heuristics or for understanding the inherent complexity of the problem. Several Operations Research approaches apply decomposition or relaxation strategies upon such a structure identified within a given problem. The next step is to design algorithms that adaptively integrate that kind of information during search. We claim in this paper, inspired by previous work on impact-based search strategies for constraint programming, that using an explanation-based constraint solver makes it possible to collect invaluable information on both the dynamically revealed and the static structures of a problem instance. Moreover, we discuss how dedicated OR solving strategies (such as Benders decomposition) could be adapted to constraint programming when specific relationships between variables are exhibited.
The Backbone of the Travelling Salesperson
In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI’05)
Abstract

Cited by 2 (0 self)
We study the backbone of the travelling salesperson optimization problem. We prove that it is intractable to approximate the backbone with any performance guarantee, assuming that P ≠ NP and there is a limit on the number of edges falsely returned. Nevertheless, in practice, it appears that much of the backbone is present in close-to-optimal solutions. We can therefore often find much of the backbone using approximation methods based on good heuristics. We demonstrate that such backbone information can be used to guide the search for an optimal solution. However, the variance in runtimes when using a backbone-guided heuristic is large. This suggests that we may need to combine such heuristics with randomization and restarts. In addition, though backbone-guided heuristics are useful for finding optimal solutions, they are less helpful in proving optimality.
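The "find much of the backbone from close-to-optimal solutions" idea can be sketched simply: collect the tour edges shared by every good tour found so far and treat them as estimated backbone edges. The tours below are toy city permutations, not instances from the paper:

```python
def tour_edges(tour):
    """Undirected edge set of a cyclic tour given as a city permutation."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def estimated_backbone(tours):
    """Edges present in every near-optimal tour: a heuristic backbone estimate."""
    edge_sets = [tour_edges(t) for t in tours]
    return set.intersection(*edge_sets)

# Both toy tours use edges {0,1} and {2,3}; a backbone-guided heuristic
# could bias its moves toward keeping such shared edges in the tour.
shared = estimated_backbone([[0, 1, 2, 3], [0, 1, 3, 2]])
```

The intractability result above says this estimate carries no worst-case guarantee; its value is purely empirical.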