Results 1–10 of 58
The Constrainedness of Search
 In Proceedings of AAAI-96
, 1996
Cited by 117 (26 self)
We propose a definition of `constrainedness' that unifies two of the most common but informal uses of the term: branching heuristics in search algorithms often try to make the most "constrained" choice, and hard search problems tend to be "critically constrained". Our definition of constrainedness generalizes a number of parameters used to study phase transition behaviour in a wide variety of problem domains. As well as predicting the location of phase transitions in solubility, constrainedness provides insight into why problems at phase transitions tend to be hard to solve. Such problems are on a constrainedness "knife-edge", and we must search deep into the problem before they look more or less soluble. Heuristics that try to get off this knife-edge as quickly as possible, for example by minimizing the constrainedness, are often very effective. We show that heuristics from a wide variety of problem domains can be seen as minimizing the constrainedness (or proxies ...
Improved Limited Discrepancy Search
 In Proceedings of AAAI-96
, 1996
Cited by 64 (3 self)
We present an improvement to Harvey and Ginsberg's limited discrepancy search algorithm. Our version eliminates much of the redundancy in the original algorithm, generating each search path from the root to the maximum search depth only once. For a uniform-depth binary tree of depth d, this reduces the asymptotic complexity from O(((d+2)/2) 2^d) to O(2^d). The savings is much less in a partial tree search, or in a heavily pruned tree. We also show that the overhead of the improved algorithm on a uniform-depth b-ary tree is only a factor of b/(b-1) compared to depth-first search. This constant factor is greater on a heavily pruned tree. Finally, we present empirical results showing the utility of limited discrepancy search, as a function of problem difficulty, on the NP-complete problem of number partitioning. 1 Introduction: Limited Discrepancy Search The best-known tree-search algorithms are breadth-first and depth-first search. Breadth-first search is rarely used in pra...
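The redundancy elimination described in this abstract is easy to sketch. Below is a minimal Python illustration (our reconstruction, not the paper's code) of the scheme on a binary tree; the caller supplies hypothetical `children` and `is_goal` callables. At iteration k, only leaves reached by exactly k discrepancies (right, anti-heuristic branches) are visited, and the preferred left child is skipped whenever the remaining depth cannot absorb the k outstanding discrepancies.

```python
def ilds(node, depth, k, children, is_goal):
    # Visit only the leaves whose path from here takes exactly k
    # discrepancies (right, anti-heuristic branches).
    if depth == 0:
        return node if is_goal(node) else None
    left, right = children(node)
    if depth > k:  # enough depth remains to spend all k discrepancies later
        found = ilds(left, depth - 1, k, children, is_goal)
        if found is not None:
            return found
    if k > 0:      # spend one discrepancy on the right child
        found = ilds(right, depth - 1, k - 1, children, is_goal)
        if found is not None:
            return found
    return None

def lds_search(root, depth, children, is_goal):
    # Iteration k probes the tree allowing exactly k discrepancies, so
    # each leaf is generated exactly once over all iterations combined.
    for k in range(depth + 1):
        found = ilds(root, depth, k, children, is_goal)
        if found is not None:
            return found
    return None
```

Across iterations k = 0..d, each of the 2^d leaves of a uniform-depth binary tree is generated exactly once, which is where the O(2^d) total comes from.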
From approximate to optimal solutions: A case study of number partitioning
 In Proceedings of the 14th IJCAI
, 1995
Cited by 35 (3 self)
Given a set of numbers, the two-way partitioning problem is to divide them into two subsets so that the sums of the numbers in the two subsets are as nearly equal as possible. The problem is NP-complete, and arises in many scheduling applications. Based on a polynomial-time heuristic due to Karmarkar and Karp, we present a new algorithm, called Complete Karmarkar-Karp (CKK), that optimally solves the general number-partitioning problem. CKK significantly outperforms the best previously known algorithms for this problem. By restricting the numbers to twelve significant digits, we can optimally solve two-way partitioning problems of arbitrary size in practice. CKK first returns the Karmarkar-Karp solution, then continues to find better solutions as time allows. Almost five orders of magnitude improvement in solution quality is obtained within a minute of running time. Rather than building a single solution one element at a time, CKK constructs subsolutions, and combines them in all possible ways. CKK is directly applicable to the 0/1 knapsack problem, since it can be reduced to number partitioning. This general approach may also be applicable to other NP-hard problems.
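The polynomial-time heuristic that CKK starts from is Karmarkar and Karp's set-differencing method: repeatedly commit the two largest remaining numbers to opposite subsets by replacing them with their difference. A minimal Python sketch of the heuristic itself (not of the complete CKK search, which additionally branches on placing the two numbers in the same subset):

```python
import heapq

def karmarkar_karp(nums):
    """Karmarkar-Karp differencing heuristic for two-way partitioning.
    Returns the discrepancy (difference of subset sums) it achieves."""
    heap = [-x for x in nums]           # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)        # largest remaining number
        b = -heapq.heappop(heap)        # second largest
        heapq.heappush(heap, -(a - b))  # commit a, b to opposite subsets
    return -heap[0] if heap else 0
```

For example, on {8, 7, 6, 5, 4} the heuristic returns a discrepancy of 2, while the optimal partition {8, 7} versus {6, 5, 4} has discrepancy 0; CKK can reach the optimum because it also explores the branch that puts the two largest numbers in the same subset (replacing them with their sum).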
Analysis of heuristics for number partitioning
 Computational Intelligence
, 1998
Cited by 25 (11 self)
We illustrate the use of phase transition behavior in the study of heuristics. Using an "annealed" theory, we define a parameter that measures the "constrainedness" of an ensemble of number partitioning problems. We identify a phase transition at a critical value of constrainedness. We then show that constrainedness can be used to analyze and compare algorithms and heuristics for number partitioning in a precise and quantitative manner. For example, we demonstrate that on uniform random problems both the Karmarkar-Karp and greedy heuristics minimize the constrainedness, but that the decisions made by the Karmarkar-Karp heuristic are superior at reducing constrainedness. This supports the better performance observed experimentally for the Karmarkar-Karp heuristic. Our results refute a conjecture of Fu that phase transition behavior does not occur in number partitioning. Additionally, they demonstrate that phase transition behavior is useful for more than just simple benchmarking. It can, for instance, be used to analyze heuristics, and to compare the quality of heuristic solutions. Key words: heuristics, number partitioning, phase transitions.
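The greedy heuristic compared against Karmarkar-Karp here is the obvious one: take the numbers in decreasing order and always place the next number in the currently lighter subset. A minimal Python sketch (an illustration, not the paper's code):

```python
def greedy_partition(nums):
    """Greedy number partitioning: largest numbers first, each into the
    subset with the smaller current sum; returns the final discrepancy."""
    s1 = s2 = 0
    for x in sorted(nums, reverse=True):
        if s1 <= s2:
            s1 += x
        else:
            s2 += x
    return abs(s1 - s2)
```

On {8, 7, 6, 5, 4}, greedy ends with discrepancy 4, where Karmarkar-Karp differencing reaches 2 and the optimum is 0; this small gap is consistent with the abstract's finding that Karmarkar-Karp's decisions are superior at reducing constrainedness.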
Easily Searched Encodings for Number Partitioning
, 1996
Cited by 19 (4 self)
Can stochastic search algorithms outperform existing deterministic heuristics for the NP-hard problem Number Partitioning if given a sufficient, but practically realizable, amount of time? In a thorough empirical investigation using a straightforward implementation of one such algorithm, simulated annealing, Johnson et al. (1991) concluded tentatively that the answer is "no." In this paper we show that the answer can be "yes" if attention is devoted to the issue of problem representation (encoding). We present results from empirical tests of several encodings of Number Partitioning with problem instances consisting of multiple-precision integers drawn from a uniform probability distribution. With these instances and with an appropriate choice of representation, stochastic and deterministic searches can, routinely and in a practical amount of time, find solutions several orders of magnitude better than those constructed by the best heuristic known (Karmarkar and Karp, 1982), which does...
Phase transition and finite-size scaling for the integer partitioning problem
, 2001
Cited by 17 (2 self)
Dedicated to D. E. Knuth on the occasion of his 64th birthday. We consider the problem of partitioning n randomly chosen integers between 1 and 2^m into two subsets such that the discrepancy, the absolute value of the difference of their sums, is minimized. A partition is called perfect if the optimum discrepancy is 0 when the sum of all n integers in the original set is even, or 1 when the sum is odd. Parameterizing the random problem in terms of κ = m/n, we prove that the problem has a phase transition at κ = 1, in the sense that for κ < 1 there are many perfect partitions with probability tending to 1 as n → ∞, while for κ > 1 there are no perfect partitions with probability tending to 1. Moreover, we show that this transition is first-order in the sense that the derivative of the so-called entropy is discontinuous at κ = 1. We also determine the finite-size scaling window about the transition point: κ_n = 1 − (2n)^{−1} log_2 n + λ_n/n, by showing that the probability of a perfect partition tends to 1, 0, or some explicitly computable p(λ) ∈ (0, 1), depending on whether λ_n tends to −∞, ∞, or λ ∈ (−∞, ∞), respectively. For λ_n → −∞ fast enough, we show that the number of perfect partitions is Gaussian in the limit. For λ_n → ∞, we prove that with high probability the optimum partition is unique, and that the optimum discrepancy is Θ(2^{λ_n}). Within the window, i.e., if |λ_n| is bounded, we prove that the optimum discrepancy is bounded. Both for λ_n → ∞ and within the window, we find the limiting distribution of the (scaled) discrepancy. Finally, both for the integer partitioning problem and for the continuous partitioning problem, we find the joint distribution of the k smallest discrepancies above the scaling window.
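The transition at κ = m/n = 1 is easy to observe empirically for small n. The sketch below is illustrative only (`perfect_fraction` and its parameters are ours, not the paper's): it brute-forces the optimum discrepancy and estimates the fraction of random instances admitting a perfect partition on either side of the critical point.

```python
import itertools
import random

def min_discrepancy(nums):
    # Exact optimum by brute force over all 2^n signings (small n only).
    return min(abs(sum(s * x for s, x in zip(signs, nums)))
               for signs in itertools.product((1, -1), repeat=len(nums)))

def perfect_fraction(n, m, trials=40, seed=0):
    # Fraction of instances (n integers uniform on 1..2^m) whose optimum
    # discrepancy is perfect: 0 when the total is even, 1 when it is odd.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        nums = [rng.randint(1, 2 ** m) for _ in range(n)]
        if min_discrepancy(nums) == sum(nums) % 2:
            hits += 1
    return hits / trials
```

With n = 12, instances at κ = 0.5 (m = 6) almost always have perfect partitions, while instances at κ = 2 (m = 24) almost never do, matching the located transition at κ = 1.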
Incomplete Tree Search using Adaptive Probing
, 2001
Cited by 15 (2 self)
When not enough time is available to fully explore a search tree, different algorithms will visit different leaves. Depth-first search and depth-bounded discrepancy search, for example, make opposite assumptions about the distribution of good leaves. Unfortunately, it is rarely clear a priori which algorithm will be most appropriate for a particular problem. Rather than fixing strong assumptions in advance, we propose an approach in which an algorithm attempts to adjust to the distribution of leaf costs in the tree while exploring it. By sacrificing completeness, such flexible algorithms can exploit information gathered during the search using only weak assumptions. As an example, we show how a simple depth-based additive cost model of the tree can be learned online. Empirical analysis using a generic tree search problem shows that adaptive probing is competitive with systematic algorithms on a variety of hard trees and outperforms them when the node-ordering heuristic makes many mistakes. Results on boolean satisfiability and two different representations of number partitioning confirm these observations. Adaptive probing combines the flexibility and robustness of local search with the ability to take advantage of constructive heuristics.
A hybrid improvement heuristic for the one-dimensional bin packing problem
 Journal of Heuristics
, 2004
Cited by 15 (5 self)
We propose in this work a hybrid improvement procedure for the bin packing problem. This heuristic has several features: the use of lower bounding strategies; the generation of initial solutions by reference to the dual min-max problem; the use of load redistribution based on dominance, differencing, and unbalancing; and the inclusion of an improvement process utilizing tabu search. Encouraging results have been obtained for a very wide range of benchmark instances, illustrating the robustness of the algorithm. The hybrid improvement procedure compares favourably with all other heuristics in the literature. It improved the best known solutions for many of the benchmark instances and found the largest number of optimal solutions with respect to the other available approximate algorithms.
KBFS: K-Best-First Search
 Annals of Mathematics and Artificial Intelligence
, 2003
Cited by 13 (1 self)
We introduce a new algorithm, K-best-first search (KBFS), which is a generalization of the well-known best-first search (BFS). In KBFS, each iteration simultaneously expands the K best nodes from the open list (rather than just the best, as in BFS). We claim that KBFS outperforms BFS in domains where the heuristic function has large errors in estimating the real distance to the goal state or does not predict dead ends in the search tree. We present empirical results that confirm this claim and show that KBFS outperforms BFS by a factor of 15 on random trees with dead ends, and by factors of 2 and 7 on the Fifteen and Twenty-Four tile puzzles, respectively. KBFS also finds better solutions than BFS and hill-climbing for the number partitioning problem. KBFS is only appropriate for finding approximate solutions with inadmissible heuristic functions.
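A minimal Python sketch of the idea (our reconstruction, not the paper's code; nodes are assumed hashable and mutually comparable so heap ties break cleanly):

```python
import heapq

def kbfs(start, k, successors, h, is_goal, max_expansions=100000):
    """K-best-first search: each iteration pops the K best open nodes by
    heuristic value h and expands them all before the next selection.
    With k = 1 this reduces to ordinary best-first search."""
    open_list = [(h(start), start)]
    seen = {start}
    while open_list and max_expansions > 0:
        # Select the K best open nodes in one batch.
        batch = [heapq.heappop(open_list)
                 for _ in range(min(k, len(open_list)))]
        for _, node in batch:
            if is_goal(node):
                return node
            max_expansions -= 1
            for child in successors(node):
                if child not in seen:
                    seen.add(child)
                    heapq.heappush(open_list, (h(child), child))
    return None
```

Expanding a batch before re-sorting dilutes the influence of a misleading heuristic: a node with a slightly worse h value is not starved behind an erroneously promising subtree.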
Probabilistic analysis of the number partitioning problem
 Journal of Physics A
, 1998
Cited by 10 (0 self)
Given a sequence of N positive real numbers {a_1, a_2, ..., a_N}, the number partitioning problem consists of partitioning them into two sets such that the absolute value of the difference of the sums of the a_j over the two sets is minimized. In the case that the a_j are statistically independent random variables uniformly distributed in the unit interval, this NP-complete problem is equivalent to the problem of finding the ground state of an infinite-range, random antiferromagnetic Ising model. We employ the annealed approximation to derive analytical lower bounds to the average value of the difference for the best constrained and unconstrained partitions in the large-N limit. Furthermore, we calculate analytically the fraction of metastable states, i.e. states that are stable against all single spin flips, and find that it vanishes like N^{−3/2}.