Results 1–10 of 42
From sampling to model counting
In Proc. IJCAI’07, 2007
Abstract

Cited by 39 (8 self)
We introduce a new technique for counting models of Boolean satisfiability problems. Our approach incorporates information obtained from sampling the solution space. Unlike previous approaches, our method does not require uniform or near-uniform samples. It instead converts local search sampling without any guarantees into very good bounds on the model count with guarantees. We give a formal analysis and provide experimental results showing the effectiveness of our approach.
Near-uniform sampling of combinatorial spaces using XOR constraints
In NIPS, 2007
Abstract

Cited by 28 (5 self)
We propose a new technique for sampling the solutions of combinatorial problems in a near-uniform manner. We focus on problems specified as a Boolean formula, i.e., on SAT instances. Sampling for SAT problems has been shown to have interesting connections with probabilistic reasoning, making practical sampling algorithms for SAT highly desirable. The best current approaches are based on Markov Chain Monte Carlo methods, which have some practical limitations. Our approach exploits combinatorial properties of random parity (XOR) constraints to prune away solutions near-uniformly. The final sample is identified amongst the remaining ones using a state-of-the-art SAT solver. The resulting sampling distribution is provably arbitrarily close to uniform. Our experiments show that our technique achieves a significantly better sampling quality than the best alternative.
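The XOR-pruning idea in this abstract can be illustrated with a toy sketch: random parity constraints each cut the solution set roughly in half, and a survivor of enough constraints is a near-uniform sample. Here the "solver" is brute-force filtering of an explicit solution list; the real method hands the formula plus XOR constraints to a SAT solver, and all names below are illustrative.

```python
import random
from itertools import product

def xor_sample(solutions, n_vars, n_xors, rng):
    """Sketch of XOR-based near-uniform sampling: add random parity
    (XOR) constraints, then return one of the surviving solutions.
    Each constraint includes every variable with probability 1/2 and
    fixes a random target parity, halving the set in expectation."""
    xors = []
    for _ in range(n_xors):
        subset = [v for v in range(n_vars) if rng.random() < 0.5]
        xors.append((subset, rng.randrange(2)))
    survivors = [s for s in solutions
                 if all(sum(s[v] for v in subset) % 2 == parity
                        for subset, parity in xors)]
    return rng.choice(survivors) if survivors else None

rng = random.Random(0)
space = list(product((0, 1), repeat=4))   # toy: every 4-bit vector
sample = xor_sample(space, 4, 2, rng)     # ~4 of 16 survive on average
```

With no constraints the sampler degenerates to a uniform draw from the full list; each added XOR trades away solver work per sample for a smaller surviving set.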
Model Counting
2008
Abstract

Cited by 24 (0 self)
Propositional model counting or #SAT is the problem of computing the number of models for a given propositional formula, i.e., the number of distinct truth assignments to variables for which the formula evaluates to true. For a propositional formula F, we will use #F to denote the model count of F. This problem is also referred to as the solution counting problem for SAT. It generalizes SAT and is the canonical #P-complete problem. There has been significant theoretical work trying to characterize the worst-case complexity of counting problems, with some surprising results such as model counting being hard even for some polynomial-time solvable problems like 2-SAT. The model counting problem presents fascinating challenges for practitioners and poses several new research questions. Efficient algorithms for this problem will have a significant impact on many application areas that are inherently beyond SAT (‘beyond’ under standard complexity theoretic assumptions), such as bounded-length adversarial and contingency planning, and probabilistic reasoning. For example, various probabilistic inference problems, such as Bayesian net reasoning, can be effectively translated into model counting problems [cf.
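The definition of #F above can be made concrete with a brute-force counter over all truth assignments; this is a sketch for illustration only (real model counters avoid full enumeration), and the DIMACS-style clause encoding is an assumption of the sketch.

```python
from itertools import product

def count_models(clauses, n_vars):
    """Naive #SAT: count assignments satisfying a CNF formula.
    clauses: list of clauses, each a list of signed ints where
    positive i means variable i and -i means its negation
    (DIMACS-style, variables numbered 1..n_vars)."""
    count = 0
    for assignment in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals is true.
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# #F = 4 for F = (x1 or x2) and (not x1 or x3) over 3 variables
print(count_models([[1, 2], [-1, 3]], 3))
```

The loop visits all 2^n assignments, which is exactly the exponential blow-up the techniques surveyed in these papers are designed to sidestep.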
Leveraging belief propagation, backtrack search, and statistics for model counting
Abstract

Cited by 20 (6 self)
We consider the problem of estimating the model count (number of solutions) of Boolean formulas, and present two techniques that compute estimates of these counts, as well as either lower or upper bounds with different tradeoffs between efficiency, bound quality, and correctness guarantee. For lower bounds, we use a recent framework for probabilistic correctness guarantees, and exploit message passing techniques for marginal probability estimation, namely, variations of Belief Propagation (BP). Our results suggest that BP provides useful information even on structured loopy formulas. For upper bounds, we perform multiple runs of the MiniSat SAT solver with a minor modification, and obtain statistical bounds on the model count based on the observation that the distribution of a certain quantity of interest is often very close to the normal distribution. Our experiments demonstrate that our model counters based on these two ideas, BPCount and MiniCount, can provide very good bounds in time significantly less than alternative approaches.
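The upper-bound idea (a per-run quantity whose distribution is close to normal) can be sketched as a small statistical helper. This is a toy under strong assumptions: MiniCount's actual samples come from modified MiniSat runs and its bound uses the Student t distribution, whereas the function below assumes the samples are already given as log2 estimates and uses a fixed normal quantile.

```python
import statistics

def statistical_upper_bound(log2_counts, z=2.33):
    """Upper confidence bound on a model count from repeated runs,
    assuming the per-run quantity log2(count estimate) is roughly
    normally distributed. z=2.33 targets ~99% one-sided coverage
    for a normal with known variance; a careful implementation
    would use the Student t quantile for small samples."""
    mu = statistics.mean(log2_counts)
    sd = statistics.stdev(log2_counts)
    return 2 ** (mu + z * sd)
```

With identical samples the bound collapses to the point estimate; spread among runs widens it, which is the intended behavior of a statistical upper bound.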
Bayesian network learning by compiling to weighted MAX-SAT
Abstract

Cited by 17 (4 self)
The problem of learning discrete Bayesian networks from data is encoded as a weighted MAX-SAT problem and the MaxWalkSat local search algorithm is used to address it. For each dataset, the per-variable summands of the (BDeu) marginal likelihood for different choices of parents (‘family scores’) are computed prior to applying MaxWalkSat. Each permissible choice of parents for each variable is encoded as a distinct propositional atom and the associated family score encoded as a ‘soft’ weighted single-literal clause. Two approaches to enforcing acyclicity are considered: either by encoding the ancestor relation or by attaching a total order to each graph and encoding that. The latter approach gives better results. Learning experiments have been conducted on 21 synthetic datasets sampled from 7 BNs. The largest dataset has 10,000 datapoints and 60 variables producing (for the ‘ancestor’ encoding) a weighted CNF input file with 19,932 atoms and 269,367 clauses. For most datasets, MaxWalkSat quickly finds BNs with higher BDeu score than the ‘true’ BN. The effect of adding prior information is assessed. It is further shown that Bayesian model averaging can be effected by collecting BNs generated during the search. 1
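The weighted MAX-SAT search this paper relies on can be illustrated with a minimal local-search sketch in the spirit of MaxWalkSat: pick an unsatisfied clause, then flip either a random variable from it or the one that most reduces the weight of unsatisfied clauses. Clause encoding, parameters, and names below are illustrative, not the paper's implementation.

```python
import random

def maxwalksat(clauses, weights, n_vars, max_flips=1000, p=0.5, rng=None):
    """Toy weighted MAX-SAT local search (MaxWalkSat-style).
    clauses: DIMACS-style signed-int clauses; weights: one weight
    per clause. Returns (best assignment, its unsatisfied weight)."""
    rng = rng or random.Random(0)
    assign = [rng.randrange(2) == 1 for _ in range(n_vars)]

    def sat(clause, a):
        return any(a[abs(l) - 1] == (l > 0) for l in clause)

    def unsat_weight(a):
        return sum(w for c, w in zip(clauses, weights) if not sat(c, a))

    best, best_cost = list(assign), unsat_weight(assign)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c, assign)]
        if not unsat:
            return assign, 0
        clause = rng.choice(unsat)
        if rng.random() < p:          # noise step: random flip
            var = abs(rng.choice(clause))
        else:                          # greedy step: best flip in clause
            def cost_after(v):
                assign[v - 1] = not assign[v - 1]
                c = unsat_weight(assign)
                assign[v - 1] = not assign[v - 1]
                return c
            var = min((abs(l) for l in clause), key=cost_after)
        assign[var - 1] = not assign[var - 1]
        cost = unsat_weight(assign)
        if cost < best_cost:
            best, best_cost = list(assign), cost
    return best, best_cost
```

In the paper's setting each soft single-literal clause would carry a family score as its weight; here the weights are arbitrary numbers supplied by the caller.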
Learning efficient Markov networks.
In Proceedings of the 24th Conference on Neural Information Processing Systems, 2010
Abstract

Cited by 17 (6 self)
We present an algorithm for learning high-treewidth Markov networks where inference is still tractable. This is made possible by exploiting context-specific independence and determinism in the domain. The class of models our algorithm can learn has the same desirable properties as thin junction trees: polynomial inference, closed-form weight learning, etc., but is much broader. Our algorithm searches for a feature that divides the state space into subspaces where the remaining variables decompose into independent subsets (conditioned on the feature and its negation) and recurses on each subspace/subset of variables until no useful new features can be found. We provide probabilistic performance guarantees for our algorithm under the assumption that the maximum feature length is bounded by a constant k (the treewidth can be much larger) and dependences are of bounded strength. We also propose a greedy version of the algorithm that, while forgoing these guarantees, is much more efficient. Experiments on a variety of domains show that our approach outperforms many state-of-the-art Markov network structure learners.
Backdoors to satisfaction
In The Multivariate Algorithmic Revolution and Beyond: Essays Dedicated to Michael R. Fellows on the Occasion of His 60th Birthday, volume 7370 of Lecture
Solving MAP Exactly by Searching on Compiled Arithmetic Circuits
In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI-06), 2006
Abstract

Cited by 11 (1 self)
The MAP (maximum a posteriori hypothesis) problem in Bayesian networks is to find the most likely states of a set of variables given partial evidence on the complement of that set. Standard structure-based inference methods for finding exact solutions to MAP, such as variable elimination and jointree algorithms, have complexities that are exponential in the constrained treewidth of the network. A more recent algorithm, proposed by Park and Darwiche, is exponential only in the treewidth and has been shown to handle networks whose constrained treewidth is quite high. In this paper we present a new algorithm for exact MAP that is not necessarily limited in scalability even by the treewidth. This is achieved by leveraging recent advances in compilation of Bayesian networks into arithmetic circuits, which can circumvent treewidth-imposed limits by exploiting the local structure present in the network. Specifically, we implement a branch-and-bound search where the bounds are computed using linear-time operations on the compiled arithmetic circuit. On networks with local structure, we observe orders-of-magnitude improvements over the algorithm of Park and Darwiche. In particular, we are able to efficiently solve many problems where the latter algorithm runs out of memory because of high treewidth.
A Scalable Approximate Model Counter
Abstract

Cited by 9 (4 self)
Propositional model counting (#SAT), i.e., counting the number of satisfying assignments of a propositional formula, is a problem of significant theoretical and practical interest. Due to the inherent complexity of the problem, approximate model counting, which counts the number of satisfying assignments to within given tolerance and confidence level, was proposed as a practical alternative to exact model counting. Yet, approximate model counting has been studied essentially only theoretically. The only reported implementation of approximate model counting, due to Karp and Luby, worked only for DNF formulas. A few existing tools for CNF formulas are bounding model counters; they can handle realistic problem sizes, but fall short of providing counts within given tolerance and confidence, and, thus, are not approximate model counters. We present here a novel algorithm, as well as a reference implementation, that is the first scalable approximate model counter for CNF formulas. The algorithm works by issuing a polynomial number of calls to a SAT solver. Our tool, ApproxMC, scales to formulas with tens of thousands of variables. Careful experimental comparisons show that ApproxMC reports, with high confidence, bounds that are close to the exact count, and also succeeds in reporting bounds with small tolerance and high confidence in cases that are too large for computing exact model counts.
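The hashing step behind approximate counters in this family can be sketched in miniature: m random XOR constraints split the assignment space into roughly 2^m cells, the survivors in one cell are counted exactly, and that count is scaled up. This toy replaces the SAT-solver calls with enumeration over an explicit solution list and omits the repetition/median machinery a real tool such as ApproxMC uses to get tolerance and confidence guarantees.

```python
import random
from itertools import product

def hashed_count(solutions, n_vars, m, rng):
    """Estimate |solutions| by intersecting with m random XOR
    constraints (one cell out of ~2^m) and scaling the cell's
    exact count by 2^m. A toy stand-in for hashing-based
    approximate model counting."""
    xors = [([v for v in range(n_vars) if rng.random() < 0.5],
             rng.randrange(2)) for _ in range(m)]
    in_cell = sum(1 for s in solutions
                  if all(sum(s[v] for v in sub) % 2 == par
                         for sub, par in xors))
    return in_cell * (2 ** m)

# Toy benchmark: the 16 even-parity 5-bit vectors as "models".
space = [s for s in product((0, 1), repeat=5) if sum(s) % 2 == 0]
estimates = [hashed_count(space, 5, 2, random.Random(i)) for i in range(30)]
```

Each estimate is a multiple of 2^m, and their spread around the true count of 16 is what the repeated-trial median in real implementations is designed to tame.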
Solving #SAT and Bayesian inference with backtracking search
Journal of Artificial Intelligence Research, 2009
Abstract

Cited by 8 (1 self)
Inference in Bayes Nets (BAYES) is an important problem with numerous applications in probabilistic reasoning. Counting the number of satisfying assignments of a propositional formula (#SAT) is a closely related problem of fundamental theoretical importance. Both these problems, and others, are members of the class of sum-of-products (SUMPROD) problems. In this paper we show that standard backtracking search when augmented with a simple memoization scheme (caching) can solve any sum-of-products problem with time complexity that is at least as good as any other state-of-the-art exact algorithm, and that it can also achieve the best known time-space tradeoff. Furthermore, backtracking's ability to utilize more flexible variable orderings allows us to prove that it can achieve an exponential speedup over other standard algorithms for SUMPROD on some instances. The ideas presented here have been utilized in a number of solvers that have been applied to various types of sum-of-products problems. These systems have exploited the fact that backtracking can naturally exploit more of the problem's structure to achieve improved performance on a range of problem instances. Empirical evidence of this performance gain has appeared in published works describing these solvers, and we provide references to these works.
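The backtracking-plus-caching idea can be sketched as a tiny #SAT counter: branch on a variable, simplify the formula, and memoize counts of residual subformulas so repeated subproblems are solved once. This is a toy under simplifying assumptions; real solvers in this line cache connected components of the residual formula rather than whole formulas, and the clause encoding below is illustrative.

```python
from functools import lru_cache

def count_sat(clauses, n_vars):
    """#SAT by backtracking search with memoization (caching).
    clauses: iterable of DIMACS-style signed-int clauses."""
    def simplify(cls, var, value):
        out = []
        for c in cls:
            if (var if value else -var) in c:
                continue                  # clause satisfied: drop it
            reduced = tuple(l for l in c if abs(l) != var)
            if not reduced:
                return None               # empty clause: contradiction
            out.append(reduced)
        return tuple(sorted(out))

    @lru_cache(maxsize=None)
    def count(cls, free):
        if cls == ():
            return 2 ** free              # remaining vars unconstrained
        var = abs(cls[0][0])              # branch on a formula variable
        total = 0
        for value in (False, True):
            sub = simplify(cls, var, value)
            if sub is not None:
                total += count(sub, free - 1)
        return total

    cls0 = tuple(sorted(tuple(c) for c in clauses))
    return count(cls0, n_vars)
```

Sorting the residual clauses canonicalizes them so that different search paths reaching the same subformula hit the same cache entry, which is the whole point of the memoization scheme.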