Results 1–10 of 25
SampleSearch: Importance Sampling in Presence of Determinism (2009)
Cited by 36 (4 self)
The paper focuses on developing effective importance sampling algorithms for mixed probabilistic and deterministic graphical models. The use of importance sampling in such graphical models is problematic because it generates many useless zero-weight samples, which are rejected, yielding an inefficient sampling process. To address this rejection problem, we propose the SampleSearch scheme, which augments sampling with systematic constraint-based backtracking search. We characterize the bias introduced by the combination of search with sampling, and derive a weighting scheme which yields an unbiased estimate of the desired statistics (e.g., the probability of evidence). When computing the weights exactly is too complex, we propose an approximation which has a weaker guarantee of asymptotic unbiasedness. We present results of an extensive empirical evaluation demonstrating that SampleSearch outperforms other schemes in the presence of a significant amount of determinism.
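The rejection problem the abstract describes is easy to see on a toy model. The sketch below contrasts plain importance sampling, which discards every zero-weight sample, with a SampleSearch-style sampler that backtracks out of constraint violations instead. The constraint, proposal, and helper functions are invented for illustration, and the paper's unbiased weight correction is omitted.

```python
import random

# Toy model: three binary variables under the hard constraint
# x0 XOR x1 == x2, with a uniform proposal. Plain importance sampling
# rejects every sample violating the constraint; the SampleSearch-style
# sampler instead backtracks to a value that still admits a solution.
# (Illustrative only; the paper's weighting scheme is not shown.)

def constraint(x):
    return (x[0] ^ x[1]) == x[2]

def plain_is_rejection_rate(n, rng):
    """Fraction of proposal samples with zero weight (i.e., rejected)."""
    rejected = sum(
        not constraint([rng.randint(0, 1) for _ in range(3)])
        for _ in range(n))
    return rejected / n

def completions(k):
    """All 0/1 completions of length k."""
    if k == 0:
        return [[]]
    return [[b] + rest for b in (0, 1) for rest in completions(k - 1)]

def sample_search(rng):
    """Assign variables one at a time; if the sampled value cannot be
    extended to a satisfying assignment, flip it (backtracking step)."""
    x = []
    for _ in range(3):
        v = rng.randint(0, 1)
        for cand in (v, 1 - v):
            if any(constraint(x + [cand] + rest)
                   for rest in completions(2 - len(x))):
                x.append(cand)
                break
    return x

rng = random.Random(0)
print(plain_is_rejection_rate(10000, rng))  # ≈ 0.5: half the work wasted
print(all(constraint(sample_search(rng)) for _ in range(100)))  # → True
```

Half of all uniform proposals violate the XOR constraint, so the plain sampler wastes about half its samples, while every sample the backtracking variant produces is a solution.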
Solution Counting Algorithms for Constraint-Centered Search Heuristics
Cited by 25 (5 self)
Constraints have played a central role in CP because they capture key substructures of a problem and efficiently exploit them to boost inference. This paper aims to do the same for search, proposing constraint-centered heuristics which guide the exploration of the search space toward areas that are likely to contain a high number of solutions. We first propose new search heuristics based on solution counting information at the level of individual constraints. We then describe efficient algorithms to evaluate the number of solutions of two important families of constraints: occurrence counting constraints, such as alldifferent, and sequencing constraints, such as regular. In both cases we take advantage of existing filtering algorithms to speed up the evaluation. Experimental results on benchmark problems show the effectiveness of our approach.
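For the occurrence-counting family, the solution count of a single alldifferent constraint equals the permanent of the 0/1 variable-value matrix. A minimal sketch, with made-up domains and naive expansion (not the paper's algorithm, which leverages the constraint's filtering structure):

```python
# The number of solutions of one alldifferent constraint equals the
# permanent of the 0/1 matrix M with M[i][v] = 1 iff value v lies in
# the domain of variable i. Domains below are illustrative.

def permanent(M):
    """Permanent by expansion along the first row
    (exponential time, fine only for tiny matrices)."""
    if not M:
        return 1
    total = 0
    for j, entry in enumerate(M[0]):
        if entry:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += permanent(minor)
    return total

# x0 in {0,1}, x1 in {0,1}, x2 in {0,1,2}
M = [[1, 1, 0],
     [1, 1, 0],
     [1, 1, 1]]
print(permanent(M))  # → 2: assignments (0,1,2) and (1,0,2)
```

Since the permanent is itself #P-hard, practical heuristics of this kind rely on bounds or approximations rather than exact expansion.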
Model Counting (2008)
Cited by 24 (0 self)
Propositional model counting or #SAT is the problem of computing the number of models for a given propositional formula, i.e., the number of distinct truth assignments to variables for which the formula evaluates to true. For a propositional formula F, we will use #F to denote the model count of F. This problem is also referred to as the solution counting problem for SAT. It generalizes SAT and is the canonical #P-complete problem. There has been significant theoretical work trying to characterize the worst-case complexity of counting problems, with some surprising results, such as model counting being hard even for some polynomial-time solvable problems like 2-SAT. The model counting problem presents fascinating challenges for practitioners and poses several new research questions. Efficient algorithms for this problem will have a significant impact on many application areas that are inherently beyond SAT (‘beyond’ under standard complexity-theoretic assumptions), such as bounded-length adversarial and contingency planning, and probabilistic reasoning. For example, various probabilistic inference problems, such as Bayesian net reasoning, can be effectively translated into model counting problems [cf.
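The definition of #F can be pinned down with a direct enumeration over truth assignments; the formula below is illustrative, and real model counters use DPLL-style search with component caching rather than this brute force.

```python
from itertools import product

# Direct #SAT by enumeration for a small CNF in DIMACS-style integer
# notation (3 means x3, -3 means NOT x3). Exponential in the number of
# variables; shown only to make the definition concrete.

def model_count(clauses, n_vars):
    count = 0
    for bits in product([False, True], repeat=n_vars):
        # Literal l is satisfied iff its variable's bit matches its sign.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# F = (x1 OR x2) AND (NOT x1 OR x3)
print(model_count([[1, 2], [-1, 3]], 3))  # → 4
```

For F above: with x1 false, x2 must be true and x3 is free (2 models); with x1 true, x3 must be true and x2 is free (2 models), so #F = 4.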
Leveraging belief propagation, backtrack search, and statistics for model counting
Cited by 20 (6 self)
We consider the problem of estimating the model count (number of solutions) of Boolean formulas, and present two techniques that compute estimates of these counts, as well as either lower or upper bounds, with different trade-offs between efficiency, bound quality, and correctness guarantees. For lower bounds, we use a recent framework for probabilistic correctness guarantees, and exploit message-passing techniques for marginal probability estimation, namely, variations of Belief Propagation (BP). Our results suggest that BP provides useful information even on structured loopy formulas. For upper bounds, we perform multiple runs of the MiniSat SAT solver with a minor modification, and obtain statistical bounds on the model count based on the observation that the distribution of a certain quantity of interest is often very close to the normal distribution. Our experiments demonstrate that our model counters based on these two ideas, BPCount and MiniCount, can provide very good bounds in time significantly less than alternative approaches.
Accelerated Adaptive Markov Chain for Partition Function Computation
Cited by 4 (1 self)
We propose a novel Adaptive Markov Chain Monte Carlo algorithm to compute the partition function. In particular, we show how to accelerate a flat histogram sampling technique by significantly reducing the number of “null moves” in the chain, while maintaining asymptotic convergence properties. Our experiments show that our method converges quickly to highly accurate solutions on a range of benchmark instances, outperforming other state-of-the-art methods such as IJGP, TRW, and Gibbs sampling both in runtime and accuracy. We also show how obtaining a so-called density of states distribution allows for efficient weight learning in Markov Logic theories.
Computing the Density of States of Boolean Formulas
Cited by 4 (2 self)
In this paper we consider the problem of computing the density of states of a Boolean formula in CNF, a generalization of both MAX-SAT and model counting. Given a Boolean formula F, its density of states counts the number of configurations that violate exactly E clauses, for all values of E. We propose a novel Markov Chain Monte Carlo algorithm based on flat histogram methods that, despite the hardness of the problem, converges quickly to a very accurate solution. Using this method, we show the first known results on the density of states of several widely used formulas and we provide novel insights about the behavior of random 3-SAT formulas around the phase transition.
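The density-of-states objective can be made concrete on a tiny CNF. The sketch below pairs exact enumeration with a simplified Wang-Landau flat-histogram sampler (one member of the flat histogram family; the formula, stage lengths, and f-halving schedule are all illustrative, not the paper's tuned algorithm).

```python
import math
import random
from itertools import product

# Density of states of a CNF: g(E) = number of assignments violating
# exactly E clauses. DIMACS-style literals (3 means x3, -3 means NOT x3).

CLAUSES = [[1, 2], [-1, 3], [-2, -3]]  # illustrative 3-variable formula
N = 3

def violated(bits):
    """Number of clauses falsified by a 0/1 assignment."""
    return sum(not any(bits[abs(l) - 1] == (l > 0) for l in c)
               for c in CLAUSES)

def exact_dos():
    g = [0] * (len(CLAUSES) + 1)
    for bits in product([False, True], repeat=N):
        g[violated(list(bits))] += 1
    return g

def wang_landau(seed=0):
    rng = random.Random(seed)
    log_g = [0.0] * (len(CLAUSES) + 1)  # running log-density estimates
    x = [rng.random() < 0.5 for _ in range(N)]
    e = violated(x)
    f = 1.0                             # modification factor (log scale)
    for _stage in range(20):            # halve f after each fixed stage
        for _ in range(5000):
            i = rng.randrange(N)
            x[i] = not x[i]             # propose a single-bit flip
            e2 = violated(x)
            # Accept with prob min(1, g(e)/g(e2)): flattens the histogram.
            if math.log(rng.random() + 1e-300) < log_g[e] - log_g[e2]:
                e = e2
            else:
                x[i] = not x[i]         # reject: undo the flip
            log_g[e] += f
        f /= 2.0
    top = max(log_g)
    z = sum(math.exp(v - top) for v in log_g)
    # Normalize so the estimates sum to the 2^N total assignments.
    return [math.exp(v - top) / z * 2 ** N for v in log_g]

print(exact_dos())                      # → [2, 6, 0, 0]
print([round(v, 2) for v in wang_landau()])
```

The Metropolis rule penalizes moves into already-dense energy levels, driving the chain to visit all levels equally and letting the log_g array converge to the (log) density of states up to a constant.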
Studies in Solution Sampling
Cited by 4 (0 self)
We introduce novel algorithms for generating random solutions from a uniform distribution over the solutions of a Boolean satisfiability problem. Our algorithms operate in two phases. In the first phase, we use a recently introduced SampleSearch scheme to generate biased samples, while in the second phase we correct the bias by using either Sampling/Importance Resampling or the Metropolis-Hastings method. Unlike state-of-the-art algorithms, our algorithms guarantee convergence in the limit. Our empirical results demonstrate the superior performance of our new algorithms over several competing schemes.
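The Sampling/Importance Resampling correction in the second phase can be sketched on a toy problem: a biased proposal Q stands in for the biased SampleSearch samples, and resampling with weights 1/Q(s) recovers an approximately uniform distribution over solutions. The solution set and Q below are invented for illustration.

```python
import random
from collections import Counter

# Two-phase bias correction: phase one draws from a biased proposal Q
# over the solution set; phase two applies Sampling/Importance
# Resampling (SIR) with weights 1/Q(s) to target the uniform
# distribution. (Toy stand-in; not the paper's SampleSearch sampler.)

SOLUTIONS = ["s0", "s1", "s2"]
Q = {"s0": 0.6, "s1": 0.3, "s2": 0.1}   # biased proposal over solutions

def phase_one(n, rng):
    """Draw n biased samples and attach importance weights vs uniform."""
    xs = rng.choices(SOLUTIONS, weights=[Q[s] for s in SOLUTIONS], k=n)
    ws = [1.0 / Q[s] for s in xs]
    return xs, ws

def phase_two(xs, ws, m, rng):
    """Resample m points with probability proportional to the weights."""
    return rng.choices(xs, weights=ws, k=m)

rng = random.Random(1)
xs, ws = phase_one(30000, rng)
corrected = phase_two(xs, ws, 30000, rng)
print(Counter(xs))        # heavily skewed toward s0
print(Counter(corrected)) # close to uniform over s0, s1, s2
```

Each weighted point contributes equal total mass (count ∝ Q(s), weight ∝ 1/Q(s)), which is why the resampled set approaches the uniform target as n grows.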
Uniform Solution Sampling Using a Constraint Solver As an Oracle
Cited by 4 (1 self)
We consider the problem of sampling from solutions defined by a set of hard constraints on a combinatorial space. We propose a new sampling technique that, while enforcing a uniform exploration of the search space, leverages the reasoning power of a systematic constraint solver in a black-box scheme. We present a series of challenging domains, such as energy barriers and highly asymmetric spaces, that reveal the difficulties introduced by hard constraints. We demonstrate that standard approaches such as Simulated Annealing and Gibbs Sampling are greatly affected, while our new technique can overcome many of these difficulties. Finally, we show that our sampling scheme naturally defines a new approximate model counting technique, which we empirically show to be very accurate on a range of benchmark problems.
Just Count the Satisfied Groundings: Scalable Local-Search and Sampling Based Inference in MLNs
Cited by 2 (1 self)
The main computational bottleneck in various sampling-based and local-search-based inference algorithms for Markov logic networks (e.g., Gibbs sampling, MC-SAT, MaxWalksat, etc.) is computing the number of groundings of a first-order formula that are true given a truth assignment to all of its ground atoms. We reduce this problem to the problem of counting the number of solutions of a constraint satisfaction problem (CSP) and show that during their execution, both sampling-based and local-search-based algorithms repeatedly solve dynamic versions of this counting problem. Drawing on the vast amount of literature on CSPs and graphical models, we propose an exact junction-tree based algorithm for computing the number of solutions of the dynamic CSP, analyze its properties, and show how it can be used to improve the computational complexity of Gibbs sampling and MaxWalksat. Empirical tests on a variety of benchmarks clearly show that our new approach is several orders of magnitude more scalable than existing approaches.
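The counting problem in question can be stated concretely with a brute-force baseline: given a truth assignment to all ground atoms, count the satisfied groundings of a clause. The domain, predicates, and atom values below are made up for illustration; the paper's contribution is precisely to replace this exhaustive enumeration with junction-tree based counting.

```python
from itertools import product

# Brute-force count of the satisfied groundings of an MLN-style clause
#   Smokes(x) AND Friends(x, y) => Smokes(y)
# under a full truth assignment to the ground atoms (toy example).

DOMAIN = ["A", "B", "C"]
smokes = {"A": True, "B": False, "C": True}
friends = {("A", "B"): True, ("A", "C"): True}  # absent pairs are False

def satisfied_groundings():
    count = 0
    for x, y in product(DOMAIN, repeat=2):
        # Clausal form: not Smokes(x) or not Friends(x, y) or Smokes(y)
        if (not smokes[x]) or (not friends.get((x, y), False)) or smokes[y]:
            count += 1
    return count

print(satisfied_groundings())  # → 8 of the 9 groundings are satisfied
```

Only the grounding (x=A, y=B) is violated here (A smokes, A and B are friends, B does not smoke); enumerating all |DOMAIN|^2 groundings like this is exactly the per-step cost the paper's CSP-based counting avoids.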