Results 1–10 of 134
The Constrainedness of Search
 In Proceedings of AAAI-96, 1996
Abstract
Cited by 126 (29 self)
We propose a definition of 'constrainedness' that unifies two of the most common but informal uses of the term. These are that branching heuristics in search algorithms often try to make the most "constrained" choice, and that hard search problems tend to be "critically constrained". Our definition of constrainedness generalizes a number of parameters used to study phase transition behaviour in a wide variety of problem domains. As well as predicting the location of phase transitions in solubility, constrainedness provides insight into why problems at phase transitions tend to be hard to solve. Such problems are on a constrainedness "knife-edge", and we must search deep into the problem before they look more or less soluble. Heuristics that try to get off this knife-edge as quickly as possible by, for example, minimizing the constrainedness are often very effective. We show that heuristics from a wide variety of problem domains can be seen as minimizing the constrainedness (or proxies ...
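Concretely, the published definition is κ = 1 − log2⟨Sol⟩ / N, where ⟨Sol⟩ is the expected number of solutions of an ensemble and 2^N the size of its state space. For random binary CSPs ⟨n, m, p1, p2⟩ this reduces to a closed form; a minimal sketch (the function names here are mine, not the paper's):

```python
import math

def kappa_binary_csp(n, m, p1, p2):
    """Constrainedness of the random binary CSP ensemble <n, m, p1, p2>:
    n variables, domain size m, constraint density p1, tightness p2.
    Closed form obtained from kappa = 1 - log2(<Sol>) / N with
    <Sol> = m**n * (1 - p2)**(p1 * n*(n-1)/2) and N = n * log2(m)."""
    return (n - 1) / 2 * p1 * math.log(1.0 / (1.0 - p2), m)

def kappa_direct(n, m, p1, p2):
    """Same quantity computed straight from the definition, as a check."""
    log2_sol = n * math.log2(m) + p1 * n * (n - 1) / 2 * math.log2(1 - p2)
    return 1.0 - log2_sol / (n * math.log2(m))

# kappa near 1 predicts the solubility phase transition
print(kappa_binary_csp(20, 10, 0.5, 0.38))
```

Ensembles with κ well below 1 are predicted to be underconstrained and soluble, those with κ well above 1 overconstrained and insoluble, and the hard instances cluster near κ ≈ 1.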
An Empirical Study of Dynamic Variable Ordering Heuristics for the Constraint Satisfaction Problem
 In Proceedings of CP-96, 1996
Abstract
Cited by 86 (15 self)
The constraint satisfaction community has developed a number of heuristics for variable ordering during backtracking search. For example, in conjunction with algorithms which check forwards, the Fail-First (FF) and Brelaz (Bz) heuristics are cheap to evaluate and are generally considered to be very effective. Recent work to understand phase transitions in NP-complete problem classes enables us to compare such heuristics over a large range of different kinds of problems. Furthermore, we are now able to start to understand the reasons for the success, and therefore also the failure, of heuristics, and to introduce new heuristics which achieve the successes and avoid the failures. In this paper, we present a comparison of the Bz and FF heuristics in forward checking algorithms applied to randomly generated binary CSPs. We also introduce new and very general heuristics and present an extensive study of these. These new heuristics are usually as good as or better than Bz and FF, and we id...
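The two baseline heuristics compared above can be sketched in a few lines (an illustrative rendering with my own data layout, not code from the paper): FF picks the unassigned variable with the smallest remaining domain; Bz breaks FF ties by the variable's degree in the constraint graph.

```python
def fail_first(domains, unassigned):
    """FF: choose the unassigned variable with the fewest remaining values."""
    return min(unassigned, key=lambda v: len(domains[v]))

def brelaz(domains, degree, unassigned):
    """Bz: smallest remaining domain, ties broken by largest degree
    (degree[v] = number of constraints mentioning v)."""
    return min(unassigned, key=lambda v: (len(domains[v]), -degree[v]))

domains = {"x": {1, 2, 3}, "y": {1, 2}, "z": {2, 3}}
degree = {"x": 2, "y": 1, "z": 2}
print(fail_first(domains, ["x", "y", "z"]))          # y and z tie on size; y comes first
print(brelaz(domains, degree, ["x", "y", "z"]))      # tie broken toward higher degree: z
```

Both are cheap because forward checking already maintains the pruned domains that the heuristics inspect.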
Random Constraint Satisfaction: A More Accurate Picture
, 1997
Abstract
Cited by 85 (7 self)
Recently there has been a great amount of interest in Random Constraint Satisfaction Problems, both from an experimental and a theoretical point of view. Rather intriguingly, experimental results with various models for generating random CSP instances suggest a "threshold-like" behaviour, and some theoretical work has been done in analyzing these models when the number of variables is asymptotic. In this paper we show that the models commonly used for generating random CSP instances suffer from a wrong parameterization which makes them unsuitable for asymptotic analysis. In particular, when the number of variables becomes large, almost all instances they generate are, trivially, overconstrained. We then present a new model that is suitable for asymptotic analysis and, in the spirit of random SAT, we derive lower and upper bounds for its parameters so that the instances generated are "almost surely" overconstrained and underconstrained, respectively. Finally, we apply the technique introduced in [19] to one of the popular models in Artificial Intelligence and derive sharper estimates for the probability of being overconstrained as a function of the number of variables.
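For readers unfamiliar with the parameterization being critiqued, here is a sketch of one commonly used generator, often called Model B (rounding conventions vary between papers, and the names below are mine):

```python
import itertools
import random

def model_b(n, m, p1, p2, rng):
    """Random binary CSP, Model B: n variables with domains {0..m-1};
    round(p1 * n*(n-1)/2) distinct constrained pairs of variables; each
    constraint forbids round(p2 * m*m) distinct value pairs."""
    pairs = list(itertools.combinations(range(n), 2))
    n_constraints = round(p1 * len(pairs))
    n_nogoods = round(p2 * m * m)
    instance = {}
    for (i, j) in rng.sample(pairs, n_constraints):
        value_pairs = list(itertools.product(range(m), repeat=2))
        instance[(i, j)] = set(rng.sample(value_pairs, n_nogoods))
    return instance

inst = model_b(10, 5, 0.4, 0.2, random.Random(0))
print(len(inst))   # number of constraints generated
```

The paper's point is that with tightness p2 held constant, as n grows such instances almost surely contain a value with no support under some constraint, which is why a different parameterization is needed for asymptotic analysis.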
Problem Structure in the Presence of Perturbations
 In Proceedings of the 14th National Conference on AI, 1997
Abstract
Cited by 81 (22 self)
Recent progress on search and reasoning procedures has been driven by experimentation on computationally hard problem instances. Hard random problem distributions are an important source of such instances. Challenge problems from the area of finite algebra have also stimulated research on search and reasoning procedures. Nevertheless, the relation of such problems to practical applications is somewhat unclear. Realistic problem instances clearly have more structure than the random problem instances, but, on the other hand, they are not as regular as the structured mathematical problems. We propose a new benchmark domain that bridges the gap between the purely random instances and the highly structured problems, by introducing perturbations into a structured domain. We will show how to obtain interesting search problems in this manner, and how such problems can be used to study the robustness of search control mechanisms. Our experiments demonstrate that the performan...
Random constraint satisfaction: Flaws and structure
 Constraints, 2001
Heavy-Tailed Distributions in Combinatorial Search
, 1997
Abstract
Cited by 76 (14 self)
Combinatorial search methods often exhibit a large variability in performance. We study the cost profiles of combinatorial search procedures. Our study reveals some intriguing properties of such cost profiles. The distributions are often characterized by very long tails or "heavy tails". We will show that these distributions are best characterized by a general class of distributions that have no moments (i.e., an infinite mean, variance, etc.). Such non-standard distributions have recently been observed in areas as diverse as economics, statistical physics, and geophysics. They are closely related to fractal phenomena, whose study was introduced by Mandelbrot. We believe this is the first finding of these distributions in a purely computational setting. We also show how random restarts can effectively eliminate heavy-tailed behavior, thereby dramatically improving the overall performance of a search procedure.
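Why restarts help can be illustrated analytically under the common Pareto model of heavy tails (a sketch under that assumption, not the paper's own experiments). If P(T > t) = t^(−α) for t ≥ 1 with α ≤ 1, the mean runtime is infinite, yet killing and restarting every run after a fixed cutoff c gives finite expected total work E[min(T, c)] / P(T ≤ c):

```python
def expected_cost_with_restarts(alpha, c):
    """Expected total work when every run is killed after c steps and
    restarted until one succeeds.  T is Pareto on [1, inf) with
    survival function P(T > t) = t**(-alpha), alpha != 1.
    E[min(T, c)] = 1 + integral_1^c t**(-alpha) dt, and by renewal
    theory E[total] = E[min(T, c)] / P(T <= c)."""
    e_min = 1 + (c ** (1 - alpha) - 1) / (1 - alpha)
    p_success = 1 - c ** (-alpha)
    return e_min / p_success

# alpha = 0.5: infinite mean without restarts, finite with any cutoff
for cutoff in (4, 16, 64):
    print(cutoff, expected_cost_with_restarts(0.5, cutoff))
```

Even a crude fixed cutoff bounds the expected cost, which is exactly the "dramatic improvement" the abstract describes; tuning c trades restart overhead against the probability of a run finishing in time.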
Evaluating Las Vegas Algorithms: Pitfalls and Remedies
 In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI-98), 1998
Abstract
Cited by 66 (21 self)
Stochastic search algorithms are among the most successful approaches for solving hard combinatorial problems. A large class of stochastic search approaches can be cast into the framework of Las Vegas Algorithms (LVAs). As the runtime behavior of LVAs is characterized by random variables, detailed knowledge of runtime distributions provides important information for the analysis of these algorithms. In this paper we propose a novel methodology for evaluating the performance of LVAs, based on the identification of empirical runtime distributions. We exemplify our approach by applying it to Stochastic Local Search (SLS) algorithms for the satisfiability problem (SAT) in propositional logic. We point out pitfalls arising from the use of improper empirical methods and discuss the benefits of the proposed methodology for evaluating and comparing LVAs.
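The central object of this methodology, an empirical runtime distribution, is just the empirical CDF of the runtimes measured over many independent runs; a minimal sketch (function names are mine):

```python
import bisect

def empirical_rtd(runtimes):
    """Return a function t -> fraction of runs that finished within time t,
    i.e. the empirical runtime distribution of a Las Vegas algorithm."""
    xs = sorted(runtimes)
    n = len(xs)
    def rtd(t):
        # count of recorded runtimes <= t, as a fraction of all runs
        return bisect.bisect_right(xs, t) / n
    return rtd

rtd = empirical_rtd([0.8, 1.5, 1.5, 4.0, 12.0])
print(rtd(1.5))   # 3 of 5 runs finished within 1.5 time units
```

Comparing whole distribution curves rather than single summary statistics is the point of the paper: two algorithms' RTD curves can cross, so which one "wins" depends on the available time budget, a fact that means and medians hide.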
Resolution versus Search: Two Strategies for SAT
 Journal of Automated Reasoning, 2000
Abstract
Cited by 58 (1 self)
The paper compares two popular strategies for solving propositional satisfiability, backtracking search and resolution, and analyzes the complexity of a directional resolution algorithm (DR) as a function of the "width" (w) of the problem's graph.
Beyond NP: the QSAT phase transition
, 1999
Abstract
Cited by 50 (7 self)
We show that phase transition behavior similar to that observed in NP-complete problems like random 3-SAT occurs further up the polynomial hierarchy in problems like random 2-QSAT. The differences between QSAT and SAT in phase transition behavior that Cadoli et al. report are largely due to the presence of trivially unsatisfiable problems. Once they are removed, we see behavior more familiar from SAT and other NP-complete domains. There are, however, some differences. Problems with short clauses show a large gap between worst-case behavior and the median, and the easy-hard-easy pattern is restricted to higher percentiles of search cost. We compute ...
The Constrainedness of Arc Consistency
 In Proceedings of CP-97, 1997
Abstract
Cited by 50 (10 self)
We show that the same methodology used to study phase transition behaviour in NP-complete problems works with a polynomial problem class: establishing arc consistency. A general measure of the constrainedness of an ensemble of problems, used to locate phase transitions in random NP-complete problems, predicts the location of a phase transition in establishing arc consistency. A complexity peak for the AC-3 algorithm is associated with this transition. Finite size scaling models both the scaling of this transition and the computational cost. On problems at the phase transition, this model of computational cost agrees with the theoretical worst case. As with NP-complete problems, constrainedness, and proxies for it which are cheaper to compute, can be used as a heuristic for reducing the number of checks needed to establish arc consistency in AC-3.
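For reference, the AC-3 algorithm whose complexity peak is studied above can be sketched as follows (a standard textbook rendering, not code from the paper; constraints are stored as allowed-pair sets in both orientations):

```python
from collections import deque

def ac3(domains, constraints):
    """Establish arc consistency.  domains: var -> set of values.
    constraints: (x, y) -> set of allowed (vx, vy) pairs, present for
    both orientations.  Returns False iff some domain is wiped out."""
    queue = deque(constraints)               # all arcs (x, y)
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        # revise x against y: drop values of x with no support in y
        pruned = {vx for vx in domains[x]
                  if not any((vx, vy) in allowed for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False                 # domain wipe-out: insoluble
            # re-examine arcs pointing at x, except the one from y
            queue.extend((z, w) for (z, w) in constraints
                         if w == x and z != y)
    return True

doms = {"a": {1, 2, 3}, "b": {1, 2, 3}}
cons = {("a", "b"): {(1, 2), (2, 3)},
        ("b", "a"): {(2, 1), (3, 2)}}
ok = ac3(doms, cons)
print(ok, doms)   # unsupported values pruned from both domains
```

Each pruning re-queues the affected arcs, and counting the membership checks inside `any(...)` gives exactly the "number of checks" that constrainedness-guided orderings try to reduce.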