Results 1–10 of 51
A compiler for deterministic, decomposable negation normal form
In AAAI-02
Cited by 54 (12 self)
We present a compiler for converting CNF formulas into deterministic, decomposable negation normal form (d-DNNF). This is a logical form that has been identified recently and shown to support a number of operations in polynomial time, including clausal entailment; model counting, minimization and enumeration; and probabilistic equivalence testing. d-DNNFs are also known to be a superset of, and more succinct than, OBDDs. The polytime logical operations supported by d-DNNFs are a subset of those supported by OBDDs, yet are sufficient for model-based diagnosis and planning applications. We present experimental results on compiling a variety of CNF formulas, some generated randomly and others corresponding to digital circuits. A number of the formulas we were able to compile efficiently could not be similarly handled by some state-of-the-art model counters, nor by some state-of-the-art OBDD compilers.
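The polytime model counting that makes d-DNNF attractive can be illustrated with a small sketch. The node encoding below is hypothetical (not from the paper), and the circuit is assumed to be smooth (all children of an OR node mention the same variables); under decomposability an AND node's count is a product, and under determinism an OR node's count is a sum:

```python
# Minimal sketch of model counting on a smooth d-DNNF circuit.
# Hypothetical node encoding: ('lit', v) for a literal,
# ('and', children) for decomposable AND, ('or', children) for
# deterministic OR.

def model_count(node, cache=None):
    """Count models bottom-up; the cache gives linear time in DAG size."""
    if cache is None:
        cache = {}
    key = id(node)
    if key in cache:
        return cache[key]
    kind = node[0]
    if kind == 'lit':
        result = 1                      # one model over the literal's variable
    elif kind == 'and':
        result = 1                      # disjoint variables: counts multiply
        for child in node[1]:
            result *= model_count(child, cache)
    else:                               # 'or': mutually exclusive children add
        result = sum(model_count(child, cache) for child in node[1])
    cache[key] = result
    return result

# (x AND y) OR (NOT x AND NOT y): exactly two models over {x, y}
circuit = ('or', [('and', [('lit', 'x'), ('lit', 'y')]),
                  ('and', [('lit', '-x'), ('lit', '-y')])])
```

A traversal like this is linear in the size of the compiled circuit, which is the point of compiling once and querying many times.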
Algorithms and Complexity Results for #SAT and Bayesian Inference
In 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2004
Cited by 52 (7 self)
Bayesian inference is an important problem with numerous applications in probabilistic reasoning. Counting satisfying assignments is a closely related problem of fundamental theoretical importance. In this paper, we show that plain old DPLL equipped with memoization (an algorithm we call #DPLL-Cache) can solve both of these problems with time complexity that is at least as good as that of state-of-the-art exact algorithms, and that it can also achieve the best known time-space tradeoff. We then proceed to show that there are instances where #DPLL-Cache can achieve an exponential speedup over existing algorithms.
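The core idea, DPLL extended with component splitting and memoization of residual subformulas, can be sketched in miniature. The function names and the clause representation (tuples of signed integers) are hypothetical; a real #DPLL-style counter would add unit propagation and a compact component encoding for the cache key:

```python
def simplify(clauses, lit):
    """Assign lit true: drop satisfied clauses, shrink the rest.
    Returns None when a clause is falsified."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = tuple(l for l in c if l != -lit)
        if not reduced:
            return None
        out.append(reduced)
    return out

def split_components(clauses):
    """Partition clauses into groups that share no variables."""
    comps, pool = [], list(clauses)
    while pool:
        comp = [pool.pop()]
        seen = {abs(l) for l in comp[0]}
        grew = True
        while grew:
            grew, rest = False, []
            for c in pool:
                if any(abs(l) in seen for l in c):
                    comp.append(c)
                    seen |= {abs(l) for l in c}
                    grew = True
                else:
                    rest.append(c)
            pool = rest
        comps.append(comp)
    return comps

def count_models(clauses, variables, cache=None):
    """#SAT by DPLL with component splitting; the cache memoizes the
    count of each residual clause set over its mentioned variables."""
    if cache is None:
        cache = {}
    if not clauses:
        return 2 ** len(variables)       # every remaining variable is free
    mentioned = {abs(l) for c in clauses for l in c}
    free = len(variables - mentioned)
    key = frozenset(clauses)
    if key in cache:
        return cache[key] * 2 ** free
    comps = split_components(clauses)
    if len(comps) > 1:                   # disjoint subproblems: counts multiply
        total = 1
        for comp in comps:
            comp_vars = {abs(l) for c in comp for l in c}
            total *= count_models(comp, comp_vars, cache)
    else:                                # branch on a variable
        v = min(mentioned)
        total = 0
        for lit in (v, -v):
            reduced = simplify(clauses, lit)
            if reduced is not None:
                total += count_models(reduced, mentioned - {v}, cache)
    cache[key] = total
    return total * 2 ** free
```

For example, (x1 ∨ x2) ∧ (x3 ∨ x4) splits into two independent components with 3 models each, so the count is 9 without ever enumerating all 16 assignments.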
Hybrid backtracking bounded by tree-decomposition of constraint networks
Artificial Intelligence, 2003
Cited by 50 (14 self)
We propose a framework for solving CSPs based both on backtracking techniques and on the notion of tree-decomposition of the constraint network. This hybrid approach defines a new enumeration framework that we expect to combine the advantages of both ingredients: the practical efficiency of enumerative algorithms, and a guaranteed bound on time complexity derived from an approximation of the tree-width of the constraint network. Finally, experimental results demonstrate the advantages of this approach.
Value Elimination: Bayesian Inference via Backtracking Search
 IN UAI03
, 2003
"... We present Value Elimination, a new algorithm for Bayesian Inference. Given the same variable ordering information, Value Elimination can achieve performance that is within a constant factor of variable elimination or recursive conditioning, and on some problems it can perform exponentially bet ..."
Abstract

Cited by 49 (2 self)
 Add to MetaCart
(Show Context)
We present Value Elimination, a new algorithm for Bayesian inference. Given the same variable ordering information, Value Elimination can achieve performance that is within a constant factor of variable elimination or recursive conditioning, and on some problems it can perform exponentially better, irrespective of the variable ordering used by these algorithms.
A Lightweight Component Caching Scheme for Satisfiability Solvers
In 10th International Conference on Theory and Applications of Satisfiability Testing, 2007
Cited by 49 (1 self)
We introduce in this paper a lightweight technique for reducing the work repetition caused by the non-chronological backtracking commonly practiced by DPLL-based SAT solvers. The presented technique can be viewed as a partial component caching scheme. Empirical evaluation of the technique reveals significant improvements on a broad range of industrial instances.
AND/OR branch-and-bound search for combinatorial optimization in graphical models
2008
Cited by 26 (16 self)
We introduce a new generation of depth-first Branch-and-Bound algorithms that explore the AND/OR search tree using static and dynamic variable orderings for solving general constraint optimization problems. The virtue of the AND/OR representation of the search space is that its size may be far smaller than that of a traditional OR representation, which can translate into significant time savings for search algorithms. The focus of this paper is on linear-space search, which explores the AND/OR search tree rather than the search graph and therefore makes no attempt to cache information. We investigate the power of the mini-bucket heuristics within the AND/OR search space, in both static and dynamic setups. We focus on two of the most common optimization problems in graphical models: finding the Most Probable Explanation (MPE) in Bayesian networks and solving Weighted CSPs (WCSPs). In extensive empirical evaluations we demonstrate that the new AND/OR Branch-and-Bound approach improves considerably over the traditional OR search strategy, and we show how various variable ordering schemes impact the performance of the AND/OR search scheme.
On probabilistic inference by weighted model counting
Artificial Intelligence
Cited by 25 (0 self)
A recent and effective approach to probabilistic inference calls for reducing the problem to one of weighted model counting (WMC) on a propositional knowledge base. Specifically, the approach calls for encoding the probabilistic model, typically a Bayesian network, as a propositional knowledge base in conjunctive normal form (CNF), with weights associated with each model according to the network parameters. Given this CNF, computing the probability of some evidence becomes a matter of summing the weights of all CNF models consistent with the evidence. A number of variations on this approach have appeared in the literature recently, varying across three orthogonal dimensions. The first dimension concerns the specific encoding used to convert a Bayesian network into a CNF. The second dimension relates to whether weighted model counting is performed using a search algorithm on the CNF, or by compiling the CNF into a structure that renders WMC a polytime operation in the size of the compiled structure. The third dimension deals with the specific properties of network parameters (local structure) which are captured in the CNF encoding. In this paper, we discuss recent work in this area across the above three dimensions, and demonstrate empirically its practical importance in significantly expanding the reach of exact probabilistic inference. We restrict our discussion to exact inference and model counting, even though some of these proposals have been extended to approximate inference and approximate model counting.
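The WMC quantity itself is easy to pin down with a brute-force sketch: sum, over all satisfying assignments, the product of per-literal weights. This exponential enumeration is only an illustration of the definition (the papers surveyed use search or compilation to avoid it), and the `wmc` name and clause encoding are hypothetical:

```python
from itertools import product

def wmc(clauses, weights):
    """Brute-force weighted model count: clauses are tuples of signed
    integers; weights maps each literal (v or -v) to its weight."""
    variables = sorted({abs(l) for c in clauses for l in c})
    total = 0.0
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        # keep only assignments satisfying every clause
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            w = 1.0
            for v in variables:
                w *= weights[v if assign[v] else -v]
            total += w
    return total

# A single clause (x1 OR x2) with Pr(x1)=0.6, Pr(x2)=0.3:
# WMC = 1 - 0.4*0.7 = 0.72, the probability the clause holds.
w = wmc([(1, 2)], {1: 0.6, -1: 0.4, 2: 0.3, -2: 0.7})
```

With a Bayesian-network encoding, the same summation over models consistent with the evidence yields the probability of that evidence.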
A structure-based variable ordering heuristic for SAT
In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI'03), 2003
Cited by 25 (2 self)
We propose a variable ordering heuristic for SAT, which is based on a structural analysis of the SAT problem. We show that when the heuristic is used by a Davis-Putnam SAT solver that employs conflict-directed backtracking, it produces a divide-and-conquer behavior in which the SAT problem is recursively decomposed into smaller problems that are solved independently. We discuss the implications of this divide-and-conquer behavior for our ability to provide structure-based guarantees on the complexity of Davis-Putnam SAT solvers. We also report on the integration of this heuristic with ZChaff, a state-of-the-art SAT solver, showing experimentally that it significantly improves performance on a range of benchmark problems that exhibit structure.
DPLL with a trace: From SAT to knowledge compilation
In IJCAI-05, 2005
Cited by 22 (2 self)
We show that the trace of an exhaustive DPLL search can be viewed as a compilation of the propositional theory. With different constraints imposed or lifted on the DPLL algorithm, this compilation will belong to the language of d-DNNF, FBDD, or OBDD, respectively. These languages are decreasingly succinct, yet increasingly tractable, supporting such polynomial-time queries as model counting and equivalence testing. Our contribution is thus twofold. First, we provide a uniform framework, supported by empirical evaluations, for compiling knowledge into various languages of interest. Second, we show that given a particular variant of DPLL, by identifying the language membership of its traces, one gains a fundamental understanding of the intrinsic complexity and computational power of the search algorithm itself. As interesting examples, we unveil the "hidden power" of several recent model counters, point to one of their potential limitations, and identify a key limitation of DPLL-based procedures in general.
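A minimal sketch of trace recording, assuming clauses as tuples of signed integers and a fixed variable order (`compile_trace` and the node encoding are hypothetical names). Caching identical residual formulas merges isomorphic subtrees, so the recorded trace is a decision DAG rather than a tree, which is the structural kinship with BDD-style languages the abstract describes:

```python
TRUE, FALSE = ('T',), ('F',)

def simplify(clauses, lit):
    """Assign lit true; return None when a clause is falsified."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = tuple(l for l in c if l != -lit)
        if not reduced:
            return None
        out.append(reduced)
    return out

def compile_trace(clauses, order, cache=None):
    """Exhaustive DPLL over a fixed variable order, recording its
    recursion as ('ite', var, high_branch, low_branch) nodes."""
    if cache is None:
        cache = {}
    if clauses is None:                 # a branch reached a conflict
        return FALSE
    if not clauses:                     # all clauses satisfied
        return TRUE
    key = (frozenset(clauses), len(order))
    if key not in cache:
        v, rest = order[0], order[1:]
        hi = compile_trace(simplify(clauses, v), rest, cache)
        lo = compile_trace(simplify(clauses, -v), rest, cache)
        cache[key] = ('ite', v, hi, lo)
    return cache[key]
```

Because every path tests variables in the same fixed order and equal residual formulas share a node, the resulting DAG behaves like an (unreduced) ordered decision diagram; relaxing or restoring such constraints on the search is what moves the trace between the languages mentioned above.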
Approximate counting by sampling the backtrackfree search space
In AAAI, 2007
Cited by 18 (5 self)
We present a new estimator for counting the number of solutions of a Boolean satisfiability problem as part of an importance sampling framework. The estimator uses the recently introduced SampleSearch scheme, which is designed to overcome the rejection problem associated with distributions having a substantial amount of determinism. We show here that the sampling distribution of SampleSearch can be characterized as the backtrack-free distribution, and we propose several schemes for its computation. This allows integrating SampleSearch into the importance sampling framework for approximating the number of solutions, and also allows using SampleSearch for computing a lower-bound measure on the number of solutions. Our empirical evaluation demonstrates the superiority of our new approximate counting schemes over recent competing approaches.
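The backtrack-free distribution can be illustrated with a toy sampler that prunes dead-end values with an exact SAT oracle before sampling, so no sample is ever rejected. All names here (`has_solution`, `backtrack_free_sample`) and the use of an exact oracle are illustrative assumptions; the paper's SampleSearch achieves the same effect via backtracking search rather than an oracle:

```python
import random

def simplify(clauses, lit):
    """Assign lit true; return None when a clause is falsified."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = tuple(l for l in c if l != -lit)
        if not reduced:
            return None
        out.append(reduced)
    return out

def has_solution(clauses):
    """Tiny recursive SAT check used as the pruning oracle."""
    if clauses is None:
        return False
    if not clauses:
        return True
    v = abs(clauses[0][0])
    return has_solution(simplify(clauses, v)) or has_solution(simplify(clauses, -v))

def backtrack_free_sample(clauses, variables, rng=random):
    """Draw one solution of a satisfiable formula with no rejection:
    at each step only values extendable to a full solution are kept.
    Returns the assignment and its probability under the
    backtrack-free distribution."""
    assignment, prob = {}, 1.0
    for v in variables:
        viable = [lit for lit in (v, -v) if has_solution(simplify(clauses, lit))]
        lit = rng.choice(viable)        # uniform over surviving values
        prob *= 1.0 / len(viable)
        assignment[v] = lit > 0
        clauses = simplify(clauses, lit)
    return assignment, prob
```

Averaging 1/prob over independent samples gives the standard importance-sampling estimate of the number of solutions, since each solution x contributes 1 = q(x) * (1/q(x)) in expectation under the backtrack-free proposal q.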