Results 1–9 of 9
Using Weighted MaxSat Engines to Solve MPE
 Proc. 18th Nat’l Conf. Artificial Intelligence
, 2002
Abstract

Cited by 39 (0 self)
Logical and probabilistic reasoning are closely related, and many problems in each domain have natural analogs in the other. One example is the strong relationship between weighted MAXSAT and MPE. This paper presents a simple reduction of MPE to weighted MAXSAT. It also investigates approximating MPE by converting it to a weighted MAXSAT problem and then using incomplete methods for weighted MAXSAT to generate a solution. We show that converting MPE problems to MAXSAT problems and solving them with a method designed for MAXSAT often produces solutions that are vastly superior to those of previous local search methods designed directly for MPE. Weighted MAXSAT is the problem of taking a set of clauses with associated weights and finding the instantiation that produces the largest sum of the weights of satisfied clauses; it is used, for example, to resolve conflicts in a knowledge base. Finding approximate solutions to weighted MAXSAT has received significant research attention, and novel algorithms have been developed that have proved very successful. This paper investigates applying local search algorithms developed for weighted MAXSAT to approximately solve MPE. Local search is a general optimization technique that can be used alone or as a method for improving solutions found by other approximation methods. We compare two successful local search algorithms from the MAXSAT domain (Discrete Lagrangian Multipliers (Wah & Shang 1997) and Guided Local Search (Mills & Tsang 2000)) to the local search method proposed for MPE (Kask & Dechter 1999). For large problems, the MAXSAT algorithms proved to be significantly more powerful, typically providing instantiations that are orders of magnitude more probable. The paper is organized as follows: first, we formally introduce the MPE and MAXSAT problems; then we present the reduction of MPE to MAXSAT; we then introduce the MAXSAT algorithms that will be evaluated; finally, we provide experimental results comparing the solution quality of MPE approximations using the MAXSAT methods to the previously proposed local search method developed for MPE.
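The reduction the abstract describes can be illustrated in miniature. The sketch below (a hypothetical two-variable network with made-up probabilities; brute force stands in for the MAXSAT local search engines the paper evaluates) turns each CPT entry with probability p into a clause that is violated exactly when an instantiation matches the entry, weighted -log p, so that minimizing the violated weight maximizes the product of selected probabilities:

```python
import math
from itertools import product

# Hypothetical two-variable network A -> B; all probabilities are made up.
# Each CPT maps an assignment of the variable and its parents to a probability.
cpts = {
    "A": {(("A", 0),): 0.7, (("A", 1),): 0.3},
    "B": {
        (("A", 0), ("B", 0)): 0.9, (("A", 0), ("B", 1)): 0.1,
        (("A", 1), ("B", 0)): 0.2, (("A", 1), ("B", 1)): 0.8,
    },
}

def to_weighted_maxsat(cpts):
    """Each CPT entry with probability p becomes one clause, violated exactly
    when the instantiation matches the entry, with weight -log p. Minimising
    the violated weight then maximises the product of the selected p's.
    (Zero-probability entries would become hard clauses; none occur here.)"""
    return [(entry, -math.log(p))
            for table in cpts.values() for entry, p in table.items()]

def violated_weight(assignment, clauses):
    # Total weight of the clauses this assignment violates (entries it matches).
    return sum(w for entry, w in clauses
               if all(assignment[var] == val for var, val in entry))

clauses = to_weighted_maxsat(cpts)
# Exhaustive search over instantiations stands in for a MAXSAT solver here.
mpe = min((dict(zip("AB", vals)) for vals in product([0, 1], repeat=2)),
          key=lambda a: violated_weight(a, clauses))
mpe_prob = math.exp(-violated_weight(mpe, clauses))
```

On these numbers the weighted-clause optimum is A=0, B=0 with probability 0.7 * 0.9 = 0.63, which is exactly the MPE of the toy network.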
MAP Complexity Results and Approximation Methods
 In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence (UAI)
, 2002
Abstract

Cited by 35 (2 self)
MAP is the problem of finding a most probable instantiation of a set of variables in a Bayesian network, given some evidence. MAP appears
Partial implicit unfolding in the Davis-Putnam procedure for quantified Boolean formulae
 In Proceedings of the International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR'01)
, 2001
Abstract

Cited by 33 (0 self)
Quantified Boolean formulae (QBF) offer a means of representing many propositional formulae exponentially more compactly than propositional logic. Recent work on automating reasoning with QBF has concentrated on extending the Davis-Putnam procedure to handle QBF. Although the resulting procedures make it possible to evaluate QBF that could not be efficiently reduced to propositional logic (the reduction requires worst-case exponential space), their efficiency often lags far behind the reductive approach when the reduction is possible. We attribute this inefficiency to the fact that many of the unit resolution steps possible in the reduced (propositional logic) formula are not performed in the corresponding QBF. To combine the conciseness of the QBF representation with the stronger inferences available in the unquantified representation, we introduce a stronger propagation algorithm for QBF, which can be seen as partially unfolding the universal quantification. The algorithm runs in worst-case exponential time, like the reduction of QBF to propositional logic, but needs only polynomial space. By restricting the algorithm, the exponential behavior can be avoided while still preserving many of the useful inferences.
Complexity results and approximation strategies for MAP explanations
 Journal of Artificial Intelligence Research
, 2004
Abstract

Cited by 33 (3 self)
MAP is the problem of finding a most probable instantiation of a set of variables given evidence. MAP has always been perceived to be significantly harder than the related problems of computing the probability of a variable instantiation (Pr) or computing the most probable explanation (MPE). This paper investigates the complexity of MAP in Bayesian networks. Specifically, we show that MAP is complete for NP^PP and provide further negative complexity results for algorithms based on variable elimination. We also show that MAP remains hard even when MPE and Pr become easy. For example, we show that MAP is NP-complete when the networks are restricted to polytrees, and even then cannot be effectively approximated. Given the difficulty of computing MAP exactly, and the difficulty of approximating MAP while providing useful guarantees on the resulting approximation, we investigate best-effort approximations. We introduce a generic MAP approximation framework and provide two instantiations of it: one for networks which are amenable to exact inference (Pr), and one for networks for which even exact inference is too hard. This allows MAP approximation on networks that are too complex to exactly solve even the easier problems, Pr and MPE. Experimental results indicate that these approximation algorithms provide much better solutions than standard techniques, and yield accurate MAP estimates in many cases.
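The distinction between MAP and MPE that drives these complexity results can be seen on a toy joint distribution (all numbers hypothetical): projecting the MPE onto the MAP variables need not give the MAP instantiation, because MAP sums out the remaining variables inside the maximization.

```python
# Hypothetical joint distribution over binary X and Y, chosen so that the
# MPE's X-value disagrees with MAP over {X} (Y summed out).
joint = {(0, 0): 0.3, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.0}

mpe = max(joint, key=joint.get)                          # most probable (x, y)
marginal = {x: sum(joint[(x, y)] for y in (0, 1)) for x in (0, 1)}
map_x = max(marginal, key=marginal.get)                  # most probable x alone
```

Here the MPE is (X=1, Y=0) with probability 0.4, yet MAP over {X} is X=0, since P(X=0) = 0.6 beats P(X=1) = 0.4. That embedded summation is what pushes MAP up to NP^PP while MPE stays in NP.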
Performing Bayesian inference by weighted model counting
 In Proceedings of the National Conference on Artificial Intelligence (AAAI)
, 2005
Abstract

Cited by 28 (0 self)
Over the past decade, general satisfiability testing algorithms have proven to be surprisingly effective at solving a wide variety of constraint satisfaction problems, such as planning and scheduling (Kautz and Selman 2003). Solving such NP-complete tasks by "compilation to SAT" has turned out to be an approach of both practical and theoretical interest. Recently, Sang et al. (2004) have shown that state-of-the-art SAT algorithms can be efficiently extended to the harder task of counting the number of models (satisfying assignments) of a formula, by employing a technique called component caching. This paper begins to investigate whether "compilation to model counting" could be a practical technique for solving real-world #P-complete problems, in particular Bayesian inference. We describe an efficient translation from Bayesian networks to weighted model counting, extend the best model counting algorithms to weighted model counting, develop an efficient method for computing all marginals in a single counting pass, and evaluate the approach on computationally challenging reasoning problems.
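A minimal sketch of the idea (not the paper's actual clausal encoding): if each complete assignment is weighted by the product of the CPT entries it selects, then any query probability is a weighted count of the models of the corresponding formula. The network and numbers below are hypothetical.

```python
from itertools import product

# Hypothetical network A -> B with made-up parameters.
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}

def weight(a, b):
    # Weight of a complete assignment = product of the CPT entries it selects.
    return p_a[a] * p_b_given_a[(a, b)]

def wmc(formula):
    # Weighted model count: total weight of assignments satisfying `formula`.
    return sum(weight(a, b) for a, b in product([0, 1], repeat=2)
               if formula(a, b))

total = wmc(lambda a, b: True)     # weights sum to 1.0 over all assignments
p_b1 = wmc(lambda a, b: b == 1)    # P(B=1) = 0.7*0.1 + 0.3*0.8 = 0.31
```

A real weighted model counter avoids this exhaustive enumeration by exploiting clause structure (e.g., via component caching), but the quantity it computes is exactly this sum.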
Phase Transitions of PP-Complete Satisfiability Problems
 Proc. 17th IJCAI
, 2001
Abstract

Cited by 8 (0 self)
The complexity class PP consists of all decision problems solvable by polynomial-time probabilistic Turing machines. It is well known that PP is a highly intractable complexity class and that PP-complete problems are in all likelihood harder than NP-complete problems. We investigate the existence of phase transitions for a family of PP-complete Boolean satisfiability problems under the fixed clauses-to-variables ratio model. A typical member of this family is the decision problem #3SAT(>= 2^(n/2)): given a 3-CNF formula, is it satisfied by at least the square root of the total number of possible truth assignments? We provide evidence to the effect that there is a critical ratio r_3
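The decision problem #3SAT(>= 2^(n/2)) can be stated directly as code. The brute-force counter below (on a hypothetical 4-variable instance) runs in exponential time, which is the point: the problem asks for a threshold on the model count, not a single witness.

```python
from itertools import product

def count_models(n, clauses):
    """Count assignments of n Boolean variables satisfying every clause.
    Clauses use DIMACS-style literals: +i / -i for variable i (1-based)."""
    def sat(assign, clause):
        return any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
    return sum(all(sat(a, c) for c in clauses)
               for a in product([False, True], repeat=n))

def at_least_sqrt(n, clauses):
    # #3SAT(>= 2^(n/2)): satisfied by at least sqrt(2^n) assignments?
    return count_models(n, clauses) >= 2 ** (n / 2)

# Hypothetical 4-variable 3-CNF instance.
clauses = [(1, 2, -3), (-1, 3, 4), (2, -3, -4)]
answer = at_least_sqrt(4, clauses)
```

This instance has 11 models out of 16 assignments, comfortably above the threshold 2^2 = 4, so the answer is yes.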
Robust FPGA Resynthesis Based on Fault-Tolerant Boolean Matching
Abstract

Cited by 7 (6 self)
We present FPGA logic synthesis algorithms for stochastic fault rate reduction in the presence of both permanent and transient defects. We develop an algorithm for fault-tolerant Boolean matching (FTBM), which exploits the flexibility of the LUT configuration to maximize the stochastic yield rate for a logic function. Using FTBM, we propose a robust resynthesis algorithm (ROSE) which maximizes the stochastic yield rate for an entire circuit. Finally, we show that existing PLB (programmable logic block) templates for area-aware Boolean matching and logic resynthesis are not effective for fault tolerance, and we propose a new robust template with path reconvergence. Compared to the state-of-the-art academic technology mapper Berkeley ABC, ROSE with the proposed robust PLB template reduces the fault rate by 25% with 1% fewer LUTs, and increases MTBF (mean time between failures) by 31%, while preserving the optimal logic depth.
The Complexity of Model Aggregation
 In Proceedings of the Fifth International Conference on Artificial Intelligence Planning Systems (AIPS)
, 2000
Abstract

Cited by 4 (1 self)
We show that the problem of transforming a structured Markov decision process (MDP) into a Bounded Interval MDP is coNP^PP-hard. In particular, the test for epsilon-homogeneity, a necessary part of verifying any proposed partition, is coNP^PP-complete. This indicates that, without further assumptions on the sorts of partitioning allowed or on the structure of the original propositional MDP, this is not likely to be a practical approach. We also analyze the complexity of finding the minimal-size partition, and of the k-block partition existence problem. Finally, we show that the test for homogeneity of an exact partition is complete for coNP^(C=P), which is the same class as coNP^PP.