Results 1–10 of 84
Using Constraint Programming and Local Search Methods to Solve Vehicle Routing Problems
, 1998
"... We use a local search method we term Large Neighbourhood Search (LNS) for solving vehicle routing problems. LNS meshes well with constraint programming technology and is analogous to the shuffling technique of jobshop scheduling. The technique explores a large neighbourhood of the current solution ..."
Abstract

Cited by 139 (2 self)
We use a local search method we term Large Neighbourhood Search (LNS) for solving vehicle routing problems. LNS meshes well with constraint programming technology and is analogous to the shuffling technique of job-shop scheduling. The technique explores a large neighbourhood of the current solution by selecting a number of customer visits to remove from the routing plan, and reinserting these visits using a constraint-based tree search. We analyse the performance of LNS on a number of vehicle routing benchmark problems. Unlike related methods, we use Limited Discrepancy Search during the tree search to reinsert visits. We also maintain diversity during search by dynamically altering the number of visits to be removed, and by using a randomised choice method for selecting visits to remove. We analyse the performance of our method for various parameter settings controlling the discrepancy limit, the dynamicity of the size of the removal set, and the randomness of the choice. We demonst...
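The remove-and-reinsert loop of LNS can be sketched in Python. This is a minimal illustration under stated assumptions, not the paper's method: it uses greedy cheapest insertion in place of the constraint-based tree search with Limited Discrepancy Search, a single-vehicle route, and hypothetical function and parameter names.

```python
import random

def route_cost(route, dist):
    # Cost of a single-vehicle tour: depot 0 -> visits -> depot 0.
    tour = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def lns(route, dist, iters=200, k=2, seed=0):
    # Large Neighbourhood Search sketch: repeatedly remove k random
    # visits and greedily reinsert each at its cheapest position (the
    # paper instead reinserts with a constraint-based LDS tree search).
    rng = random.Random(seed)
    best = route[:]
    for _ in range(iters):
        cand = best[:]
        removed = rng.sample(cand, k)
        for v in removed:
            cand.remove(v)
        for v in removed:
            pos = min(range(len(cand) + 1),
                      key=lambda i: route_cost(cand[:i] + [v] + cand[i:], dist))
            cand.insert(pos, v)
        if route_cost(cand, dist) < route_cost(best, dist):
            best = cand
    return best
```

Dynamically varying `k` and randomising which visits are removed, as the abstract describes, would slot into the `removed = ...` line.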
An Empirical Study of Dynamic Variable Ordering Heuristics for the Constraint Satisfaction Problem
 In Proceedings of CP96
, 1996
"... . The constraint satisfaction community has developed a number of heuristics for variable ordering during backtracking search. For example, in conjunction with algorithms which check forwards, the FailFirst (FF) and Brelaz (Bz) heuristics are cheap to evaluate and are generally considered to be ver ..."
Abstract

Cited by 69 (15 self)
The constraint satisfaction community has developed a number of heuristics for variable ordering during backtracking search. For example, in conjunction with algorithms which check forwards, the Fail-First (FF) and Brélaz (Bz) heuristics are cheap to evaluate and are generally considered to be very effective. Recent work to understand phase transitions in NP-complete problem classes enables us to compare such heuristics over a large range of different kinds of problems. Furthermore, we are now able to start to understand the reasons for the success, and therefore also the failure, of heuristics, and to introduce new heuristics which achieve the successes and avoid the failures. In this paper, we present a comparison of the Bz and FF heuristics in forward checking algorithms applied to randomly generated binary CSPs. We also introduce new and very general heuristics and present an extensive study of these. These new heuristics are usually as good as or better than Bz and FF, and we id...
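A minimal sketch of the setting being compared: a forward-checking backtracker that branches on the variable with the smallest remaining domain, the usual reading of Fail-First. This is illustrative Python under an assumed CSP encoding (domains as sets, binary constraints as allowed-pair sets), not the authors' experimental code.

```python
def solve(domains, constraints):
    # Forward-checking backtracker with the fail-first heuristic:
    # branch on the unassigned variable with the smallest remaining domain.
    if all(len(d) == 1 for d in domains.values()):
        sol = {v: next(iter(d)) for v, d in domains.items()}
        ok = all((sol[x], sol[y]) in allowed
                 for (x, y), allowed in constraints.items())
        return sol if ok else None
    var = min((v for v in domains if len(domains[v]) > 1),
              key=lambda v: len(domains[v]))
    for val in sorted(domains[var]):
        new = {v: set(d) for v, d in domains.items()}
        new[var] = {val}
        wiped = False
        for (x, y), allowed in constraints.items():
            # Forward check: prune values of future variables that are
            # inconsistent with the assignment var = val.
            if x == var:
                new[y] = {b for b in new[y] if (val, b) in allowed}
                wiped = wiped or not new[y]
            elif y == var:
                new[x] = {a for a in new[x] if (a, val) in allowed}
                wiped = wiped or not new[x]
        if not wiped:
            sol = solve(new, constraints)
            if sol is not None:
                return sol
    return None
```

A Brélaz-style tie-break (most constraining variable among those with minimal domain) would replace the `key=` in the `min` call.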
Random constraint satisfaction: Flaws and structure
 Constraints
, 2001
"... 4, and Toby Walsh 5 ..."
Constraint propagation
 Handbook of Constraint Programming
, 2006
"... Constraint propagation is a form of inference, not search, and as such is more ”satisfying”, both technically and aesthetically. —E.C. Freuder, 2005. Constraint reasoning involves various types of techniques to tackle the inherent ..."
Abstract

Cited by 52 (4 self)
Constraint propagation is a form of inference, not search, and as such is more "satisfying", both technically and aesthetically. —E.C. Freuder, 2005. Constraint reasoning involves various types of techniques to tackle the inherent...
Trying Harder to Fail First
 In: Thirteenth European Conference on Artificial Intelligence (ECAI 98)
, 1997
"... Variable ordering heuristics can have a profound effect on the performance of backtracking search algorithms for constraint satisfaction problems. The smallestremainingdomain heuristic is a commonlyused dynamic variable ordering heuristic, used in conjunction with algorithms such as forward checki ..."
Abstract

Cited by 49 (1 self)
Variable ordering heuristics can have a profound effect on the performance of backtracking search algorithms for constraint satisfaction problems. The smallest-remaining-domain heuristic is a commonly used dynamic variable ordering heuristic, used in conjunction with algorithms such as forward checking which look ahead at the effects of each variable instantiation on those variables not yet instantiated. This heuristic has been explained as an implementation of the fail-first principle, stated by Haralick and Elliott [7], i.e. that the next variable selected should be the one which is most likely to result in an immediate failure. We calculate the probability that a variable will fail when using the forward checking algorithm to solve a class of binary CSPs. We derive a series of heuristics, starting with smallest-remaining-domain, based on increasingly accurate estimates of this probability, and predict that if the fail-first principle is sound, the more accurate the estimate the better...
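One way to read "increasingly accurate estimates of this probability" is the following hedged sketch: under the standard random-binary-CSP assumption that each value of a neighbouring variable conflicts with the chosen value independently with probability equal to the constraint tightness, the chance that an instantiation wipes out some neighbouring domain has a simple closed form. The formula and names here are illustrative, not the paper's actual derivation.

```python
def fail_probability(neighbour_domain_sizes, tightness):
    # Probability that instantiating a variable empties at least one
    # neighbouring domain, assuming each of a neighbour's values
    # conflicts with the chosen value independently with probability
    # `tightness` (the tightness p2 of random binary CSPs; an assumption).
    p_no_wipeout = 1.0
    for d in neighbour_domain_sizes:
        p_no_wipeout *= 1.0 - tightness ** d  # this neighbour survives
    return 1.0 - p_no_wipeout
```

A heuristic in this family would branch on the variable maximising the estimate; note that a smaller neighbouring domain d gives a larger wipeout term tightness**d, matching the fail-first intuition.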
Efficient methods for qualitative spatial reasoning
 Proceedings of the 13th European Conference on Artificial Intelligence
, 1998
"... The theoretical properties of qualitative spatial reasoning in the RCC8 framework have been analyzed extensively. However, no empirical investigation has been made yet. Our experiments show that the adaption of the algorithms used for qualitative temporal reasoning can solve large RCC8 instances, ..."
Abstract

Cited by 47 (14 self)
The theoretical properties of qualitative spatial reasoning in the RCC8 framework have been analyzed extensively. However, no empirical investigation has been made yet. Our experiments show that the adaptation of the algorithms used for qualitative temporal reasoning can solve large RCC8 instances, even if they are in the phase transition region, provided that one uses the maximal tractable subsets of RCC8 that have been identified by us. In particular, we demonstrate that the orthogonal combination of heuristic methods is successful in solving almost all apparently hard instances in the phase transition region up to a certain size in reasonable time.
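The algorithms adapted from qualitative temporal reasoning are path-consistency style: repeatedly refine the relation between each pair of entities by intersecting it with the composition of the relations through a third. RCC8's composition table is too large to reproduce here, so this hedged sketch uses the much simpler point algebra {<, =, >}; the same refinement loop applies to RCC8 given its table.

```python
from itertools import product

BASE = {'<', '=', '>'}

def compose(r, s):
    # Point-algebra composition: relations possible between x and z
    # given x r y and y s z.
    if r == '=':
        return {s}
    if s == '=':
        return {r}
    if r == s:          # '<;<' is '<', '>;>' is '>'
        return {r}
    return set(BASE)    # '<;>' and '>;<' leave x, z unconstrained

def path_consistent(n, rel):
    # rel[(i, j)] is the set of base relations still possible between
    # i and j (both directions supplied). Refine until a fixpoint; an
    # empty relation proves inconsistency.
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            if len({i, k, j}) < 3:
                continue
            through_k = set().union(*(compose(r, s)
                                      for r in rel[(i, k)]
                                      for s in rel[(k, j)]))
            refined = rel[(i, j)] & through_k
            if refined != rel[(i, j)]:
                rel[(i, j)] = refined
                if not refined:
                    return False
                changed = True
    return True
```

For RCC8 one would replace `compose` with a lookup into its 8x8 composition table and restrict inputs to a tractable subset, as the abstract recommends.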
The Constrainedness of Arc Consistency
 In Proceedings of CP97
, 1997
"... . We show that the same methodology used to study phase transition behaviour in NPcomplete problems works with a polynomial problem class: establishing arc consistency. A general measure of the constrainedness of an ensemble of problems, used to locate phase transitions in random NPcomplete proble ..."
Abstract

Cited by 45 (9 self)
We show that the same methodology used to study phase transition behaviour in NP-complete problems works with a polynomial problem class: establishing arc consistency. A general measure of the constrainedness of an ensemble of problems, used to locate phase transitions in random NP-complete problems, predicts the location of a phase transition in establishing arc consistency. A complexity peak for the AC3 algorithm is associated with this transition. Finite size scaling models both the scaling of this transition and the computational cost. On problems at the phase transition, this model of computational cost agrees with the theoretical worst case. As with NP-complete problems, constrainedness, and proxies for it which are cheaper to compute, can be used as a heuristic for reducing the number of checks needed to establish arc consistency in AC3.

1 Introduction

Following [4] there has been considerable research into phase transition behaviour in NP-complete problems. Problems from...
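For reference, the AC3 algorithm whose complexity peak is studied can be sketched as follows, with a counter for the consistency checks used as the cost measure. This is a textbook-style AC-3 under an assumed data layout (domains as sets, constraints as allowed-pair sets keyed by directed arc), not the paper's instrumented implementation.

```python
from collections import deque

def ac3(domains, allowed):
    # AC3 with a consistency-check counter. `allowed[(x, y)]` holds the
    # consistent (a, b) pairs for the directed arc x -> y; both
    # directions of every constraint must be present.
    checks = 0
    queue = deque(allowed)
    while queue:
        x, y = queue.popleft()
        removed = set()
        for a in domains[x]:
            supported = False
            for b in domains[y]:       # search for a support for a
                checks += 1
                if (a, b) in allowed[(x, y)]:
                    supported = True
                    break
            if not supported:
                removed.add(a)
        if removed:
            domains[x] -= removed
            if not domains[x]:         # domain wipeout: inconsistent
                return False, checks
            for (u, v) in allowed:     # re-examine arcs into x
                if v == x and u != y:
                    queue.append((u, v))
    return True, checks
```

The `checks` counter is the quantity whose peak at the phase transition the abstract describes.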
Local Search and the Number of Solutions
, 1996
"... . There has been considerable research interest into the solubility phase transition, and its effect on search cost for backtracking algorithms. In this paper we show that a similar easyhardeasy pattern occurs for local search, with search cost peaking at the phase transition. This is despite prob ..."
Abstract

Cited by 44 (6 self)
There has been considerable research interest into the solubility phase transition, and its effect on search cost for backtracking algorithms. In this paper we show that a similar easy-hard-easy pattern occurs for local search, with search cost peaking at the phase transition. This is despite problems beyond the phase transition having fewer solutions, which intuitively should make the problems harder to solve. We examine the relationship between search cost and number of solutions at different points across the phase transition, for three different local search procedures, across two problem classes (CSP and SAT). Our findings show that there is a significant correlation, which changes as we move through the phase transition.

Keywords: computational complexity, constraint satisfaction, propositional satisfiability, search

1 Introduction

Local search has been proposed as a good candidate for solving the "hard" but soluble problems that turn up at the phase transition in solubility f...
Beyond NP: the QSAT phase transition
, 1999
"... We show that phase transition behavior similar to that observed in NPcomplete problems like random 3Sat occurs further up the polynomial hierarchy in problems like random 2Qsat. The differences between Qsat and Sat in phase transition behavior that Cadoli et al report are largely due to the ..."
Abstract

Cited by 44 (7 self)
We show that phase transition behavior similar to that observed in NP-complete problems like random 3-Sat occurs further up the polynomial hierarchy in problems like random 2-Qsat. The differences between Qsat and Sat in phase transition behavior that Cadoli et al. report are largely due to the presence of trivially unsatisfiable problems. Once they are removed, we see behavior more familiar from Sat and other NP-complete domains. There are, however, some differences. Problems with short clauses show a large gap between worst case behavior and median, and the easy-hard-easy pattern is restricted to higher percentiles of search cost. We compute...
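A Qsat instance can be evaluated naively by recursing over its quantifier prefix, as in this hedged sketch (exponential time, for illustration only; the encoding of the prefix as ('A'|'E', var) pairs over DIMACS-style clauses is an assumption, not the paper's):

```python
def qsat(prefix, clauses, asg=None):
    # Evaluate a quantified Boolean formula by recursion over the
    # quantifier prefix: 'A' branches must both hold, 'E' needs one.
    asg = asg or {}
    if not prefix:
        return all(any(asg.get(abs(l)) == (l > 0) for l in c)
                   for c in clauses)
    q, v = prefix[0]
    branches = [qsat(prefix[1:], clauses, {**asg, v: val})
                for val in (True, False)]
    return all(branches) if q == 'A' else any(branches)
```

For example, forall-x exists-y (x iff y) is true, while exists-y forall-x (x iff y) is false, which the recursion reproduces.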
Backbone Fragility and the Local Search Cost Peak
 Journal of Artificial Intelligence Research
, 2000
"... The local search algorithm WSat is one of the most successful algorithms for solving the satisfiability (SAT) problem. It is notably e#ective at solving hard Random 3SAT instances near the socalled `satisfiability threshold', but still shows a peak in search cost near the threshold and large va ..."
Abstract

Cited by 40 (3 self)
The local search algorithm WSat is one of the most successful algorithms for solving the satisfiability (SAT) problem. It is notably effective at solving hard Random 3-SAT instances near the so-called 'satisfiability threshold', but still shows a peak in search cost near the threshold and large variations in cost over different instances. We make a number of significant contributions to the analysis of WSat on high-cost random instances, using the recently introduced concept of the backbone of a SAT instance. The backbone is the set of literals which are entailed by an instance. We find that the number of solutions predicts the cost well for small-backbone instances but is much less relevant for the large-backbone instances which appear near the threshold and dominate in the overconstrained region. We show a very strong correlation between search cost and the Hamming distance to the nearest solution early in WSat's search. This pattern leads us to introduce a measure of the ba...
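For context, WSat's inner loop can be sketched as follows: pick an unsatisfied clause, then with some probability flip a random variable in it (a noise step), and otherwise flip the variable in it that leaves the fewest clauses unsatisfied. This is a generic WalkSAT sketch under assumed parameter names, not the authors' instrumented solver.

```python
import random

def walksat(n_vars, clauses, p=0.5, max_flips=10000, seed=0):
    # WalkSAT sketch: clauses are lists of nonzero DIMACS-style ints.
    rng = random.Random(seed)
    asg = [rng.random() < 0.5 for _ in range(n_vars)]
    satisfied = lambda c: any(asg[abs(l) - 1] == (l > 0) for l in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return asg                        # model found
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause)) - 1  # noise step
        else:
            def cost(v):                      # unsat count after flipping v
                asg[v] = not asg[v]
                n = sum(1 for c in clauses if not satisfied(c))
                asg[v] = not asg[v]
                return n
            var = min((abs(l) - 1 for l in clause), key=cost)
        asg[var] = not asg[var]
    return None                               # gave up within max_flips
```

Instrumenting this loop to record the Hamming distance from `asg` to the nearest known solution after the first few flips would give the correlation measure the abstract describes.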