Results 1–10 of 144
Using Problem Structure for Efficient Clause Learning
In Proceedings of the 6th International Conference on Theory and Applications of Satisfiability Testing, 2003
Abstract

Cited by 14 (4 self)
DPLL-based clause learning algorithms for satisfiability testing are known to work very well in practice. However, like most branch-and-bound techniques, their performance depends heavily on the variable order used in making branching decisions. We propose a novel way of exploiting the underlying problem structure to guide clause learning algorithms toward faster solutions. The key idea is to use a higher-level problem description, such as a graph or a PDDL specification, to generate a good branching sequence as an aid to SAT solvers.
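To make the branching-sequence idea concrete, here is a minimal, hypothetical DPLL sketch (the names `dpll`, `unit_propagate`, and the `branch_order` parameter are ours, not the paper's) in which the solver consumes a precomputed variable order when making decisions:

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign variables forced by unit clauses; None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                   # all literals false: conflict
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

def dpll(clauses, branch_order, assignment=None):
    """DPLL search that branches in the order given by `branch_order`,
    which should cover all variables (e.g. a structure-derived sequence)."""
    assignment = dict(assignment or {})
    if unit_propagate(clauses, assignment) is None:
        return None
    for var in branch_order:
        if var not in assignment:
            for value in (True, False):
                result = dpll(clauses, branch_order, {**assignment, var: value})
                if result is not None:
                    return result
            return None
    return assignment
```

Clauses are DIMACS-style lists of signed integers; supplying a good `branch_order` plays the role of the structure-derived branching sequence described in the abstract.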
Learning and Inference in Weighted Logic with Application to Natural Language Processing
, 2008
Towards an Optimal Separation of Space and Length in Resolution
Electronic Colloquium on Computational Complexity, 2008
Abstract

Cited by 14 (9 self)
Most state-of-the-art satisfiability algorithms today are variants of the DPLL procedure augmented with clause learning. The main bottleneck for such algorithms, other than the obvious one of time, is the amount of memory used. In the field of proof complexity, the resources of time and memory correspond to the length and space of resolution proofs. There has been a long line of research trying to understand these proof complexity measures, as well as relating them to the width of proofs, i.e., the size of the largest clause in the proof, which has been shown to be intimately connected with both length and space. While strong results have been proven for length and width, our understanding of space is still quite poor. For instance, it has remained open whether the fact that a formula is provable in short length implies that it is also provable in small space (which is the case for length versus width), or whether on the contrary these measures are completely unrelated in the sense that short proofs can be arbitrarily complex with respect to space. In this paper, we present some evidence that the true answer should be that the latter case holds and provide a possible roadmap for how such an optimal separation result could be obtained. We do this by proving a tight bound of Θ(√n) on the space needed for so-called pebbling contradictions over pyramid graphs of size n. This yields the first polynomial lower bound on space that is not a consequence of a corresponding lower bound on width, as well as an improvement of the weak separation of space and width in (Nordström 2006) from logarithmic to polynomial. Also, continuing the line of research initiated by (Ben-Sasson 2002) into trade-offs between different proof complexity measures, we present a simplified proof of the recent length-space trade-off result in (Hertel and Pitassi 2007), and show how our ideas can be used to prove a couple of other exponential trade-offs in resolution.
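The pebbling contradictions mentioned in the abstract are standard CNF benchmarks: sources of a DAG are asserted true, each internal vertex is implied by its two predecessors, and the sink is asserted false. A sketch for pyramid graphs (function name and variable numbering are our own) might look like:

```python
def pyramid_pebbling_cnf(h):
    """Pebbling contradiction over a pyramid of height h, as a CNF.
    Level 0 holds the h+1 source vertices; level h holds the single sink.
    Clauses use DIMACS-style signed integers."""
    var = {}
    def vid(level, i):
        if (level, i) not in var:
            var[(level, i)] = len(var) + 1
        return var[(level, i)]
    clauses = []
    for i in range(h + 1):                    # sources are true
        clauses.append([vid(0, i)])
    for level in range(1, h + 1):             # both predecessors true -> vertex true
        for i in range(h + 1 - level):
            clauses.append([-vid(level - 1, i), -vid(level - 1, i + 1),
                            vid(level, i)])
    clauses.append([-vid(h, 0)])              # the sink is false
    return clauses
```

For height h the formula has (h+1)(h+2)/2 variables and is unsatisfiable by construction, since truth propagates from the sources up to the sink.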
New filtering algorithms for combinations of among constraints
, 2008
Abstract

Cited by 9 (2 self)
Several combinatorial problems, such as car sequencing and rostering, feature sequence constraints, restricting the number of occurrences of certain values in every subsequence of a given length. We present three new filtering algorithms for the sequence constraint, including the first to establish domain consistency in polynomial time. The filtering algorithms have complementary strengths: One borrows ideas from dynamic programming; another reformulates it as a regular constraint; the last is customized. The last two algorithms establish domain consistency, and the customized one does so in polynomial time. We provide experimental results that show the practical usefulness of each. We also show that the customized algorithm equally applies to a generalized version of the sequence constraint for subsequences of varied lengths. The significant computational advantage of using one generalized sequence constraint over a semantically equivalent combination of among or sequence constraints is demonstrated experimentally.
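For readers unfamiliar with these constraints, a sketch of their semantics (a plain checker with our own function names, not the domain-filtering algorithms themselves, which operate on variable domains) might look like:

```python
def among_ok(values, allowed, lo, hi):
    """Among constraint: the number of entries of `values` drawn from the
    set `allowed` lies in the interval [lo, hi]."""
    count = sum(1 for v in values if v in allowed)
    return lo <= count <= hi

def sequence_ok(values, allowed, lo, hi, q):
    """Sequence constraint: Among holds on every window of length q,
    as in car sequencing (limit option occurrences per subsequence)."""
    return all(among_ok(values[i:i + q], allowed, lo, hi)
               for i in range(len(values) - q + 1))
```

In a car-sequencing reading, `values` is the production order, `allowed` marks cars needing a given option, and `q`, `hi` encode a capacity rule such as "at most 1 per 3 consecutive cars".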
The Resolution Complexity of Independent Sets and Vertex Covers in Random Graphs
Abstract

Cited by 5 (0 self)
We consider the problem of providing a resolution proof of the statement that a given graph with n vertices and average degree roughly ∆ does not contain an independent set of size k. For randomly chosen graphs and k ≤ n/3, we show that such proofs asymptotically almost surely require size roughly exponential in n/∆^6. This, in particular, implies a 2^Ω(n) lower bound for constant-degree graphs, and for ∆ ≈ n^{1/6}, shows that there are almost always no short resolution proofs for k as large as n/3 even though a maximum independent set is likely to be much smaller, roughly n^{5/6} in size. Our result implies that for graphs that are not too dense, almost all instances of the independent set problem are hard for resolution. Further, it provides an unconditional exponential lower bound on the running time of resolution-based search algorithms for finding a maximum independent set or approximating it within a factor of ∆/(6 ln ∆). We also give relatively simple upper bounds for the problem and show them to be tight for the class of exhaustive backtracking algorithms. We deduce similar complexity results for the related vertex cover problem on random graphs, proving, in particular, that no polynomial-time resolution-based method can achieve an approximation within a factor of 3/2.
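The exhaustive backtracking algorithms referred to above can be illustrated by a small maximum-independent-set search (a generic sketch with our own names, not the paper's construction):

```python
def max_independent_set(adj):
    """Exhaustive backtracking search for a maximum independent set.
    `adj` maps each vertex to the set of its neighbours."""
    vertices = sorted(adj)
    best = []

    def extend(i, current, forbidden):
        nonlocal best
        if len(current) > len(best):
            best = list(current)
        # prune: even taking every remaining vertex cannot beat `best`
        if i == len(vertices) or len(current) + (len(vertices) - i) <= len(best):
            return
        v = vertices[i]
        if v not in forbidden:
            # branch 1: include v and forbid its neighbours downstream
            extend(i + 1, current + [v], forbidden | adj[v])
        # branch 2: skip v
        extend(i + 1, current, forbidden)

    extend(0, [], set())
    return best
```

The lower bounds in the paper apply to any resolution-based search of this kind: on random graphs of moderate density, such backtracking implicitly builds an exponentially large refutation.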
Relaxed DPLL search for MaxSAT
In 12th SAT, 2009
Abstract

Cited by 4 (2 self)
We propose a new incomplete algorithm for the Maximum Satisfiability (MaxSAT) problem on unweighted Boolean formulas, focused specifically on instances for which proving unsatisfiability is already computationally difficult. For such instances, our approach is often able to identify a small number of what we call “bottleneck” constraints, in time comparable to the time it takes to prove unsatisfiability. These bottleneck constraints can have useful semantic content. Our algorithm uses a relaxation of the standard backtrack search for satisfiability testing (SAT) as a guiding heuristic, followed by a low-noise local search when needed. This allows us to heuristically exploit the power of unit propagation and clause learning. On a test suite consisting of all unsatisfiable industrial instances from SAT Race 2008, our solver, RelaxedMinisat, is the only (MaxSAT) solver capable of identifying a single bottleneck constraint in all but one instance.
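The low-noise local search component can be sketched generically (this is a plain WalkSAT-style loop of our own, not RelaxedMinisat's actual heuristic):

```python
import random

def maxsat_local_search(clauses, n_vars, steps=2000, noise=0.1, seed=0):
    """Low-noise local search for MaxSAT on DIMACS-style integer clauses:
    mostly greedy flips, with an occasional random (noise) flip."""
    rng = random.Random(seed)
    assignment = [rng.choice([False, True]) for _ in range(n_vars)]

    def num_sat(a):
        # number of clauses with at least one satisfied literal
        return sum(any((lit > 0) == a[abs(lit) - 1] for lit in cl)
                   for cl in clauses)

    best_score, best = num_sat(assignment), list(assignment)
    for _ in range(steps):
        if rng.random() < noise:
            v = rng.randrange(n_vars)          # random walk step
        else:
            # greedy step: flip the variable yielding the highest score
            v = max(range(n_vars),
                    key=lambda i: num_sat(assignment[:i]
                                          + [not assignment[i]]
                                          + assignment[i + 1:]))
        assignment[v] = not assignment[v]
        score = num_sat(assignment)
        if score > best_score:
            best_score, best = score, list(assignment)
    return best_score, best
```

On a formula where unsatisfiability stems from a few bottleneck clauses, the clauses left unsatisfied at the best assignment point to those bottlenecks.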
Optimization With Parity Constraints: From Binary Codes to Discrete Integration
Abstract

Cited by 5 (3 self)
Many probabilistic inference tasks involve summations over exponentially large sets. Recently, it has been shown that these problems can be reduced to solving a polynomial number of MAP inference queries for a model augmented with randomly generated parity constraints. By exploiting a connection with max-likelihood decoding of binary codes, we show that these optimizations are computationally hard. Inspired by iterative message passing decoding algorithms, we propose an Integer Linear Programming (ILP) formulation for the problem, enhanced with new sparsification techniques to improve decoding performance. By solving the ILP through a sequence of LP relaxations, we get both lower and upper bounds on the partition function, which hold with high probability and are much tighter than those obtained with variational methods.
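The randomly generated parity constraints at the heart of this reduction are simple to state; a hypothetical sketch (function names are our own):

```python
import random

def random_parity_constraints(n_vars, m, rng):
    """Draw m random parity (XOR) constraints over variables 0..n_vars-1.
    Each constraint (subset, parity) requires XOR of the subset == parity."""
    return [([i for i in range(n_vars) if rng.random() < 0.5],
             rng.randrange(2)) for _ in range(m)]

def satisfies(assignment, constraints):
    """Check a 0/1 assignment against all parity constraints."""
    return all(sum(assignment[i] for i in subset) % 2 == parity
               for subset, parity in constraints)
```

Each nonempty constraint eliminates any fixed assignment with probability 1/2, so m constraints cut the feasible set by a factor of roughly 2^m in expectation; this is what lets MAP queries on the augmented model estimate the size of the original summation.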
Embed and Project: Discrete Sampling with Universal Hashing
Abstract

Cited by 6 (0 self)
We consider the problem of sampling from a probability distribution defined over a high-dimensional discrete set, specified for instance by a graphical model. We propose a sampling algorithm, called PAWS, based on embedding the set into a higher-dimensional space which is then randomly projected using universal hash functions to a lower-dimensional subspace and explored using combinatorial search methods. Our scheme can leverage fast combinatorial optimization tools as a black box and, unlike MCMC methods, samples produced are guaranteed to be within an (arbitrarily small) constant factor of the true probability distribution. We demonstrate that by using state-of-the-art combinatorial search tools, PAWS can efficiently sample from Ising grids with strong interactions and from software verification instances, while MCMC and variational methods fail in both cases.
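The universal hash functions used for the random projection are typically affine maps over GF(2); a hypothetical sketch (our own names, not the PAWS implementation):

```python
import itertools

def parity_hash(A, b, x):
    """Affine hash h(x) = A x + b over GF(2), mapping {0,1}^n -> {0,1}^m.
    In schemes like PAWS, A and b are drawn uniformly at random; here
    they are explicit inputs so the example is deterministic."""
    return tuple((sum(a * xi for a, xi in zip(row, x)) + bi) % 2
                 for row, bi in zip(A, b))

def hash_cell(S, A, b):
    """Keep only the points of S that project to the all-zero bucket."""
    zero = tuple([0] * len(b))
    return [x for x in S if parity_hash(A, b, x) == zero]
```

For a full-rank m×n matrix A, each bucket receives exactly 2^(n-m) points of the full cube, which is the "randomly projected subspace" the combinatorial search then explores.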