Results 1–10 of 92
Dynamic Restart Policies
2002
Cited by 65 (6 self)
We describe theoretical results and an empirical study of context-sensitive restart policies for randomized search procedures.
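For a concrete point of reference, the simplest restart policies are fixed schedules such as the Luby sequence; the sketch below illustrates plain scheduled restarts, not the paper's context-sensitive dynamic policies, and the `try_solve` callback, cutoff scaling and all names are illustrative assumptions:

```python
import random

def luby(i):
    """i-th term (1-indexed) of the Luby restart sequence: 1, 1, 2, 1, 1, 2, 4, ..."""
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if (1 << k) - 1 == i:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)

def restart_search(try_solve, base_cutoff=100, max_restarts=20, seed=0):
    """Run a randomized search procedure under a Luby restart schedule.
    try_solve(rng, cutoff) returns a solution, or None if the cutoff expires."""
    rng = random.Random(seed)
    for i in range(1, max_restarts + 1):
        result = try_solve(rng, base_cutoff * luby(i))
        if result is not None:
            return result
    return None
```

A context-sensitive policy, as studied in the paper, would instead choose each cutoff from observations of the current run rather than from a fixed sequence.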
Solving Difficult Instances of Boolean Satisfiability in the Presence of Symmetry
2002
Backbones and backdoors in satisfiability
In Proceedings of the National Conference on Artificial Intelligence (AAAI)
2005
Cited by 42 (1 self)
We study the backbone and the backdoors of propositional satisfiability problems. We make a number of theoretical, algorithmic and experimental contributions. From a theoretical perspective, we prove that backbones are hard even to approximate. From an algorithmic perspective, we present a number of different procedures for computing backdoors. From an empirical perspective, we study the correlation between being in the backbone and in a backdoor. Experiments show that there tends to be very little overlap between backbones and backdoors. We also study problem hardness for the Davis-Putnam procedure. Problem hardness appears to be correlated with the size of strong backdoors, and weakly correlated with the size of the backbone, but does not appear to be correlated with the size of weak backdoors or their number. Finally, to isolate the effect of backdoors, we look at problems with no backbone.
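To make the terminology concrete: the backbone is the set of literals fixed in every satisfying assignment. A brute-force sketch for toy CNF instances follows (exhaustive enumeration only; the function name and DIMACS-style literal convention are illustrative, not one of the paper's procedures):

```python
from itertools import product

def backbone(clauses, n_vars):
    """Backbone of a CNF formula by exhaustive enumeration.
    Clauses are lists of ints: v means variable v is true, -v means false."""
    models = [bits for bits in product([False, True], repeat=n_vars)
              if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)]
    bb = set()
    for v in range(1, n_vars + 1):
        vals = {m[v - 1] for m in models}
        if vals == {True}:
            bb.add(v)        # v is true in every model
        elif vals == {False}:
            bb.add(-v)       # v is false in every model
    return bb
```

For example, the formula (x1) ∧ (¬x1 ∨ x2) forces both variables true, so its backbone is {1, 2}, while (x1 ∨ x2) fixes nothing and has an empty backbone.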
Permutation Problems and Channelling Constraints
 TR 26, APES Group
2001
Cited by 39 (2 self)
When writing a constraint program, we have to decide what the decision variables should be, and how to represent the constraints on these variables. In many cases, there is considerable choice for the decision variables. For example, with permutation problems, we can choose between a primal and a dual representation. In the dual representation, dual variables stand for the primal values, whilst dual values stand for the primal variables. By means of channelling constraints, a combined model can have both primal and dual variables. In this paper, we perform an extensive theoretical and empirical study of these different models. Our results will help constraint programmers choose a model for a permutation problem. They also illustrate a general methodology for comparing different constraint models.
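The primal/dual relationship can be illustrated by brute force: the channelling constraint x[i] = j ⟺ y[j] = i says the dual assignment is simply the inverse permutation of the primal one. A toy enumerator under that assumption (illustrative names; a real combined model would post channelling constraints inside a constraint solver rather than enumerate):

```python
from itertools import permutations

def channelled_models(n, primal_ok, dual_ok):
    """Enumerate permutations satisfying constraints stated on either view:
    primal x (x[i] = value in position i) or dual y (y[j] = position of value j).
    The channelling x[i] == j  <=>  y[j] == i links the two representations."""
    solutions = []
    for x in permutations(range(n)):
        y = [0] * n
        for i, j in enumerate(x):
            y[j] = i                     # channelling: y is the inverse of x
        if primal_ok(x) and dual_ok(y):
            solutions.append(x)
    return solutions
```

Constraints that are awkward in one view ("position 0 does not hold value 0") are often natural in the other ("value 2 sits in position 0"), which is the practical appeal of keeping both models channelled together.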
Solving Non-Boolean Satisfiability Problems with Stochastic Local Search
In Proc. IJCAI-01
2001
Cited by 38 (2 self)
Much excitement has been generated by the success of stochastic local search procedures at finding solutions to large, very hard satisfiability problems. Many of the problems on which these procedures have been effective are non-Boolean in that they are most naturally formulated in terms of variables with domain sizes greater than two. Approaches to solving non-Boolean satisfiability problems fall into two categories. In the direct approach, the problem is tackled by an algorithm for non-Boolean problems. In the transformation approach, the non-Boolean problem is reformulated as an equivalent Boolean problem and then a Boolean solver is used. This paper compares four methods for solving non-Boolean problems: one direct and three transformational. The comparison first examines the search spaces confronted by the four methods, then tests their ability to solve random formulas, the round-robin sports scheduling problem and the quasigroup completion problem. The experiments show that the relative performance of the methods depends on the domain size of the problem, and that the direct method scales better as domain size increases. Along the route to performing these comparisons we make four other contributions. First, we generalise Walksat, a highly successful stochastic local search procedure for Boolean satisfiability problems, to work on problems with domains of any finite size. Second, we introduce a new method for transforming non-Boolean problems to Boolean problems and improve on an existing transformation. Third, we identify sufficient conditions for omitting at-least-one and at-most-one clauses from a transformed formula. Fourth, for use in our experiments we propose a model for generating random formulas that vary in domain size but are similar in other respects.
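A minimal sketch of the transformation approach for a single non-Boolean variable, using the standard direct (one-hot) encoding with the at-least-one and at-most-one clauses the abstract refers to (DIMACS-style integer literals; the helper's name and interface are assumptions, not the paper's transformation code):

```python
def direct_encode(domain, next_id):
    """Direct (one-hot) Boolean encoding of one non-Boolean variable:
    one Boolean per domain value, an at-least-one clause, and pairwise
    at-most-one clauses. Returns (value -> Boolean id, clause list)."""
    ids = {v: next_id + k for k, v in enumerate(domain)}
    at_least_one = [list(ids.values())]
    at_most_one = [[-ids[a], -ids[b]]                 # no two values at once
                   for i, a in enumerate(domain) for b in domain[i + 1:]]
    return ids, at_least_one + at_most_one
```

For a domain of size d this produces 1 + d(d-1)/2 clauses per variable, which is why conditions under which these clauses can be omitted matter for the size of the transformed formula.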
Backbones in Optimization and Approximation
In IJCAI-01
2001
Cited by 38 (3 self)
We study the impact of backbones in optimization and approximation problems. We show that some optimization problems, like graph coloring, resemble decision problems, with problem hardness positively correlated with backbone size. For other optimization problems, like blocks world planning and traveling salesperson problems, problem hardness is weakly and negatively correlated with backbone size, while the cost of finding optimal and approximate solutions is positively correlated with backbone size. A third class of optimization problems, like number partitioning, has regions of both types of behavior. We find that to observe the impact of backbone size on problem hardness, it is necessary to eliminate some symmetries, perform trivial reductions and factor out the effective problem size.
SAT-based planning in complex domains: Concurrency, constraints and nondeterminism
Artificial Intelligence
2003
Cited by 32 (8 self)
Planning as satisfiability is a very efficient technique for classical planning, i.e., for planning domains in which both the effects of actions and the initial state are completely specified. In this paper we present C-SAT, a SAT-based procedure capable of dealing with planning domains having incomplete information about the initial state, and whose underlying transition system is specified using the highly expressive action language C. Thus, C-SAT allows for planning in domains involving (i) actions which can be executed concurrently; (ii) (ramification and qualification) constraints affecting the effects of actions; and (iii) nondeterminism in the initial state and in the effects of actions. We first prove the correctness and completeness of C-SAT and discuss some optimizations; we then present C-Plan, a system based on C-SAT. C-Plan works on any C planning problem, although some optimizations have not yet been fully implemented. Nevertheless, the experimental analysis shows that SAT-based approaches to planning with incomplete information are viable, at least in the case of problems with a high degree of parallelism.
Empirical Hardness Models: Methodology and a Case Study on Combinatorial Auctions
Cited by 26 (9 self)
Is it possible to predict how long an algorithm will take to solve a previously unseen instance of an NP-complete problem? If so, what uses can be found for models that make such predictions? This paper provides answers to these questions and evaluates the answers experimentally. We propose the use of supervised machine learning to build models that predict an algorithm’s runtime given a problem instance. We discuss the construction of these models and describe techniques for interpreting them to gain understanding of the characteristics that cause instances to be hard or easy. We also present two applications of our models: building algorithm portfolios that outperform their constituent algorithms, and generating test distributions that emphasize hard problems. We demonstrate the effectiveness of our techniques in a case study of the combinatorial auction winner determination problem. Our experimental results show that we can build very accurate models of an algorithm’s running time, interpret our models, build an algorithm portfolio that strongly outperforms the best single algorithm, and tune a standard benchmark suite to generate much harder problem instances.
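At its core this reduces to supervised regression of (log-)runtime on instance features. A deliberately minimal one-feature ordinary-least-squares sketch (the paper uses much richer feature sets and models; all names here are illustrative):

```python
def fit_runtime_model(features, log_runtimes):
    """Closed-form least-squares fit of log-runtime against one instance feature.
    Returns (slope, intercept)."""
    n = len(features)
    mx = sum(features) / n
    my = sum(log_runtimes) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(features, log_runtimes))
    sxx = sum((x - mx) ** 2 for x in features)
    slope = sxy / sxx
    return slope, my - slope * mx

def predict(model, x):
    """Predicted log-runtime for a new instance with feature value x."""
    slope, intercept = model
    return slope * x + intercept
```

Fitting in log space is the natural choice here, since runtimes of NP-complete solvers vary over orders of magnitude across instances.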
A simple model to generate hard satisfiable instances
 In Proceedings of IJCAI’05
2005
Cited by 25 (4 self)
In this paper, we try to further demonstrate that the models of random CSP instances proposed by [Xu and Li, 2000; 2003] are of theoretical and practical interest. Indeed, these models, called RB and RD, present several nice features. First, it is quite easy to generate random instances of any arity, since no particular structure has to be integrated, or property enforced, in such instances. Then, the existence of an asymptotic phase transition can be guaranteed while applying a limited restriction on domain size and on constraint tightness. In that case, a threshold point can be precisely located and all instances are guaranteed to be hard at the threshold, i.e., to have an exponential tree-resolution complexity. Next, a formal analysis shows that it is possible to generate forced satisfiable instances whose hardness is similar to that of unforced satisfiable ones. This analysis is supported by some representative results taken from intensive experimentation that we have carried out, using complete and incomplete search methods.
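From the parameterization usually given for Model RB, a generator can be sketched as follows (the rounding choices, the uniform sampling of forbidden pairs, and the function name are assumptions for illustration, not the authors' reference implementation):

```python
import math
import random

def model_rb(n, alpha, r, p, seed=0):
    """Sample a binary random CSP in the spirit of Model RB (Xu & Li):
    n variables, domain size d = n**alpha, about r*n*ln(n) constraints,
    each over a random pair of variables with about p*d*d forbidden
    value pairs. Returns (d, list of ((i, j), set of forbidden pairs))."""
    rng = random.Random(seed)
    d = round(n ** alpha)
    m = round(r * n * math.log(n))
    constraints = []
    for _ in range(m):
        i, j = rng.sample(range(n), 2)           # random constraint scope
        forbidden = rng.sample([(a, b) for a in range(d) for b in range(d)],
                               round(p * d * d))  # random incompatible pairs
        constraints.append(((i, j), set(forbidden)))
    return d, constraints
```

The abstract's point is that, with suitable restrictions on α and p, instances drawn this way sit at a precisely located threshold and are provably hard there, even when a satisfying assignment is planted.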