Results 1–10 of 20
Improvements To Propositional Satisfiability Search Algorithms
1995
Abstract

Cited by 161 (0 self)
... quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCStation 10 running SunOS 4.1.3 U1, POSIT can solve hard random 400-variable 3-SAT problems in about 2 hours on average. In general, it can solve hard n-variable random 3-SAT problems with search trees of size O(2^(n/18.7)). In addition to justifying these claims, this dissertation describes the most significant achievements of other researchers in this area and discusses all of the widely known general techniques for speeding up SAT search algorithms. It should be useful to anyone interested in NP-complete problems or combinatorial optimization in general, and particularly useful to researchers in either Artificial Intelligence or Operations Research.
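The kind of backtracking search this dissertation improves on can be illustrated with a toy DPLL-style procedure: unit propagation followed by branching on an unassigned variable. This is a hedged sketch, not POSIT itself; the clause encoding (lists of signed integers) and the naive first-variable branching rule are assumptions for illustration, and POSIT's contribution lies precisely in far better heuristics and engineering.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses; None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause
                          if l not in assignment and -l not in assignment]
            if not unassigned:
                return None  # every literal falsified: conflict
            if len(unassigned) == 1:
                assignment.add(unassigned[0])
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    assignment = set(assignment)
    if unit_propagate(clauses, assignment) is None:
        return None
    assigned_vars = {abs(a) for a in assignment}
    for clause in clauses:
        for l in clause:
            v = abs(l)
            if v not in assigned_vars:
                for lit in (v, -v):  # branch on both polarities
                    result = dpll(clauses, assignment | {lit})
                    if result is not None:
                        return result
                return None
    return assignment  # all clause variables assigned: a model

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
model = dpll([[1, 2], [-1, 2], [-2, 3]])
```

On this tiny formula the procedure finds a model immediately; the interest of the literature surveyed here is in how branching order and propagation engineering change the size of the search tree on hard instances.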
Testing Heuristics: We Have It All Wrong
Journal of Heuristics, 1995
Abstract

Cited by 119 (2 self)
The competitive nature of most algorithmic experimentation is a source of problems that are all too familiar to the research community. It is hard to make fair comparisons between algorithms and to assemble realistic test problems. Competitive testing tells us which algorithm is faster but not why. Because it requires polished code, it consumes time and energy that could be spent doing more experiments. This paper argues that a more scientific approach of controlled experimentation, similar to that used in other empirical sciences, avoids or alleviates these problems. We have confused research and development; competitive testing is suited only for the latter. Most experimental studies of heuristic algorithms resemble track meets more than scientific endeavors. Typically an investigator has a bright idea for a new algorithm and wants to show that it works better, in some sense, than known algorithms. This requires computational tests, perhaps on a standard set of benchmark p...
A Davis-Putnam Based Enumeration Algorithm for Linear Pseudo-Boolean Optimization
1995
Abstract

Cited by 102 (1 self)
The Davis-Putnam enumeration method (DP) has recently become one of the fastest known methods for solving the clausal satisfiability problem of propositional calculus. We present a generalization of the DP procedure for solving the satisfiability problem of a set of linear pseudo-Boolean (or 0-1) inequalities. We extend the method to solve linear 0-1 optimization problems, i.e., to optimize a linear pseudo-Boolean objective function w.r.t. a set of linear pseudo-Boolean inequalities. The algorithm compares well with traditional linear programming based methods on a variety of standard 0-1 integer programming benchmarks. Keywords: 0-1 Integer Programming; Propositional Calculus; Enumeration. Contents: 1 Introduction; 2 Preliminaries; 3 The Classical Davis-Putnam Procedure; 4 Davis-Putnam for Linear Pseudo-Boolean Inequalities; 5 Optimizing with Pseudo-Boolean Davis-Putnam; 6 Implementation; 7 Heuristics; 8 Computational Results; 9 Conclusion. 1 Introduction: The Davis-Putn...
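The step from clauses to linear 0-1 inequalities can be hinted at with the standard propagation rule for a single pseudo-Boolean constraint: a literal is forced true whenever setting it false makes the right-hand side unreachable. The normalized representation below (positive coefficients over literals) is an assumption for illustration; the paper integrates this kind of rule into a full DP-style enumeration.

```python
def forced_literals(terms, d):
    """terms: list of (coeff, literal) with coeff > 0, representing
    sum(coeff * lit) >= d over 0/1 literals. Return the literals that
    must be set true for the inequality to remain satisfiable."""
    total = sum(c for c, _ in terms)  # best achievable left-hand side
    forced = []
    for c, lit in terms:
        # if lit were false, the best achievable LHS drops to total - c
        if total - c < d:
            forced.append(lit)
    return forced

# 3*x1 + 2*x2 + x3 >= 4 forces x1 = 1: without x1 the LHS tops out at 3
print(forced_literals([(3, "x1"), (2, "x2"), (1, "x3")], 4))  # ['x1']
```

When every coefficient is 1 and d = 1, this reduces exactly to the classical unit-clause rule, which is the sense in which the pseudo-Boolean procedure generalizes DP.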
Needed: An Empirical Science Of Algorithms
Operations Research, 1994
Abstract

Cited by 73 (3 self)
this article goes to press. Journal editors can be encouraged to seek out referees who have done rigorous empirical studies. Refereeing standards will evolve, particularly as the empirical science develops.
Stochastic Boolean Satisfiability
Journal of Automated Reasoning, 2000
Abstract

Cited by 49 (2 self)
Satisfiability problems and probabilistic models are core topics of artificial intelligence and computer science. This paper looks at the rich intersection between these two areas, opening the door for the use of satisfiability approaches in probabilistic domains. The paper examines a generic stochastic satisfiability problem, SSat, which can function for probabilistic domains as Sat does for deterministic domains. It shows the connection between SSat and well-studied problems in belief network inference and planning under uncertainty, and defines algorithms, both systematic and stochastic, for solving SSat instances. These algorithms are validated on random SSat formulae generated under the fixed-clause model. In spite of the large complexity gap between SSat (PSPACE) and Sat (NP), the paper suggests that much of what we've learned about Sat transfers to the probabilistic domain. 1. Introduction: There has been a recent focus in artificial intelligence (AI) on solving problems exh...
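The semantics of SSat can be sketched by the naive recursive evaluation: existential variables are maximized over, randomized ("chance") variables are averaged. The encoding below (a quantifier prefix over signed-integer CNF, with every chance variable fixed at probability 1/2) is an assumption for illustration, and this exponential recursion is exactly what the paper's systematic and stochastic algorithms prune.

```python
def ssat_value(prefix, clauses, assignment=None):
    """prefix: list of (kind, var), kind 'E' (existential, maximized) or
    'R' (randomized, uniform 1/2 here). clauses: CNF over the prefix
    variables, as lists of signed ints. Returns the maximum probability
    of satisfaction under optimal existential play."""
    assignment = assignment or {}
    remaining = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        if all(abs(l) in assignment for l in clause):
            return 0.0  # clause falsified under this branch
        remaining.append(clause)
    if not remaining:
        return 1.0
    kind, var = prefix[0]
    vals = [ssat_value(prefix[1:], remaining, {**assignment, var: b})
            for b in (True, False)]
    return max(vals) if kind == 'E' else 0.5 * (vals[0] + vals[1])

# E x1, R x2 : (x1 or x2) and (not x1 or not x2)
p = ssat_value([('E', 1), ('R', 2)], [[1, 2], [-1, -2]])
```

Either existential choice leaves one clause depending on the coin flip for x2, so the optimal probability of satisfaction is 0.5; with an all-existential prefix the function degenerates to ordinary Sat (value 1.0 or 0.0).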
Logic-based Benders Decomposition
Mathematical Programming, 2003
Abstract

Cited by 45 (10 self)
Benders decomposition uses a strategy of “learning from one’s mistakes.” The aim of this paper is to extend this strategy to a much larger class of problems. The key is to generalize the linear programming dual used in the classical method to an “inference dual.” Solution of the inference dual takes the form of a logical deduction that yields Benders cuts. The dual is therefore very different from other generalized duals that have been proposed. The approach is illustrated by working out the details for propositional satisfiability and 0-1 programming problems. Computational tests are carried out for the latter, but the most promising contribution of logic-based Benders may be to provide a framework for combining optimization and constraint programming methods.
A Grasp For Satisfiability
Cliques, Coloring, and Satisfiability: The Second DIMACS Implementation Challenge, Volume 26 of DIMACS Series on Discrete Mathematics and Theoretical Computer Science, 1996
Abstract

Cited by 31 (6 self)
A greedy randomized adaptive search procedure (Grasp) is a randomized heuristic that has been shown to quickly produce good-quality solutions for a wide variety of combinatorial optimization problems. In this paper, we describe a Grasp for the satisfiability (SAT) problem. This algorithm can also be applied directly to both the weighted and unweighted versions of the maximum satisfiability (MAX-SAT) problem. We review the basic concepts of Grasp: construction and local search algorithms. The implementation of Grasp for the SAT problem is described in detail. Computational experience on a large set of test problems is presented.
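The two Grasp phases named in the abstract, randomized greedy construction followed by local search, can be sketched for MAX-SAT as follows. This is a hedged toy version under assumptions of my own: the greedy score (clauses newly satisfied), the restricted-candidate-list rule (values within `alpha` of the best score), and the 1-flip neighborhood are illustrative choices, not the paper's actual components.

```python
import random

def num_satisfied(clauses, assign):
    """Count clauses (lists of signed ints) satisfied by assign: var -> bool."""
    return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

def grasp_sat(clauses, n_vars, iters=30, alpha=1, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        # construction phase: assign variables in random order, picking
        # uniformly from the restricted candidate list (truth values whose
        # greedy score is within `alpha` of the best available score)
        assign = {}
        for v in rng.sample(range(1, n_vars + 1), n_vars):
            scored = []
            for b in (True, False):
                trial = {**assign, v: b}
                score = sum(any(trial.get(abs(l)) == (l > 0) for l in c)
                            for c in clauses)
                scored.append((score, b))
            top = max(s for s, _ in scored)
            rcl = [b for s, b in scored if s >= top - alpha]
            assign[v] = rng.choice(rcl)
        # local search phase: flip single variables while doing so
        # strictly increases the number of satisfied clauses
        improved = True
        while improved:
            improved = False
            for v in range(1, n_vars + 1):
                flipped = {**assign, v: not assign[v]}
                if num_satisfied(clauses, flipped) > num_satisfied(clauses, assign):
                    assign, improved = flipped, True
        if best is None or num_satisfied(clauses, assign) > num_satisfied(clauses, best):
            best = assign
        if num_satisfied(clauses, best) == len(clauses):
            break  # a satisfying assignment: solved the SAT instance
    return best

clauses = [[1, 2], [-1, 3], [-2, -3], [2, 3]]
model = grasp_sat(clauses, 3)
```

Because the best assignment over all restarts is kept, the same loop answers both the SAT question (did we reach all clauses satisfied?) and the MAX-SAT one (how many can we satisfy?), which is the dual applicability the abstract mentions.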
Partial Instantiation Methods for Inference in First Order Logic
Journal of Automated Reasoning, 2000
Abstract

Cited by 12 (0 self)
Satisfiability algorithms for propositional logic have improved enormously in recent years. This increases the attractiveness of satisfiability methods for first-order logic that reduce the problem to a series of ground-level satisfiability problems. R. Jeroslow introduced a partial instantiation method of this kind that differs radically from the standard resolution-based methods. This paper lays the theoretical groundwork for an extension of his method that is general enough and efficient enough for general logic programming with indefinite clauses. In particular, we improve Jeroslow's approach by (a) extending it to logic with functions, (b) accelerating it through the use of satisfiers, as introduced by Gallo and Rago, and (c) simplifying it to obtain further speedup. We provide a similar development for a "dual" partial instantiation approach defined by Hooker and suggest a primal/dual strategy. We prove correctness of the primal and dual algorithms for full first-order ...
Simultaneous Construction of Refutations and Models for Propositional Formulas
1995
Abstract

Cited by 8 (5 self)
Methodology is developed to attempt the simultaneous construction of either a refutation or a model for a propositional formula in conjunctive normal form. The method exploits the concept of "autarky", which was introduced by Monien and Speckenmeyer. Informally, an autarky is a "self-sufficient" model for some clauses that does not affect the remaining clauses of the formula. Whereas their work was oriented toward finding a model, our method has as its primary goal finding a refutation in the style of model elimination. It also finds a model if it fails to find a refutation, essentially by combining autarkies. However, the autarky-related processing is integrated with the refutation search, and can greatly improve the efficiency of that search even when a refutation does exist. Unlike the pruning strategies of most refinements of resolution, autarky-related pruning does not prune any successful refutation; it only prunes attempts that ultimately will be unsuccessful; conseque...
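The "self-sufficient" property described informally above has a crisp check: a partial assignment is an autarky if every clause it touches (i.e., every clause containing one of its variables) is satisfied by it, so the untouched clauses can be handled independently. A minimal sketch, with the signed-integer literal encoding as an assumption:

```python
def is_autarky(partial, clauses):
    """partial: set of literals, e.g. {1, -3} means x1=true, x3=false.
    clauses: CNF as lists of signed ints. True iff every clause that
    mentions an assigned variable is satisfied by the assignment."""
    assigned_vars = {abs(l) for l in partial}
    for clause in clauses:
        touched = any(abs(l) in assigned_vars for l in clause)
        satisfied = any(l in partial for l in clause)
        if touched and not satisfied:
            return False
    return True

clauses = [[1, 2], [-1, 2], [3, 4]]
print(is_autarky({2}, clauses))  # True: both touched clauses are satisfied
print(is_autarky({1}, clauses))  # False: [-1, 2] is touched but unsatisfied
```

When an autarky is found, the touched clauses can be deleted without affecting satisfiability of the rest, which is why autarky-related pruning never discards a successful refutation.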
New Constructs for the Description of Combinatorial Optimization Problems in Algebraic Modeling Languages
Computational Optimization and Applications, 1996
Abstract

Cited by 6 (4 self)
Algebraic languages are at the heart of many successful optimization modeling systems, yet they have been used with only limited success for combinatorial (or discrete) optimization. We show in this paper, through a series of examples, how an algebraic modeling language might be extended to help with a greater variety of combinatorial optimization problems. We consider specifically those problems that are readily expressed as the choice of a subset from a certain set of objects, rather than as the assignment of numerical values to variables. Since there is no practicable universal algorithm for problems of this kind, we explore a hybrid approach that employs a general-purpose subset enumeration scheme together with problem-specific directives to guide an efficient search.