Results 1-10 of 26
Probabilistic Horn abduction and Bayesian networks
Artificial Intelligence, 1993
Cited by 298 (37 self)
This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language. This paper also shows how a language with only (unconditionally) independent hypotheses can represent any probabilistic knowledge, and argues that it is better to invent new hypotheses to explain dependence rather than having to worry about dependence in the language.
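As a toy illustration of the framework's flavor (the theory, rule set, and all numbers below are invented for this sketch, not taken from the paper), the probability of a goal can be computed by summing the probabilities of its explanations, provided the rules make those explanations mutually exclusive:

```python
# Hypothetical probabilistic Horn abduction theory: independent hypotheses
# with priors, and rules whose bodies make the explanations of "fever"
# mutually exclusive. All names and numbers are invented for illustration.
prior = {"flu": 0.1, "no_flu": 0.9, "cold": 0.3, "no_cold": 0.7}

# rules (assumed): fever <- flu.   fever <- no_flu & cold.
explanations_of_fever = [("flu",), ("no_flu", "cold")]

def explanation_prob(expl):
    """Probability of a set of independent hypotheses."""
    p = 1.0
    for hypothesis in expl:
        p *= prior[hypothesis]
    return p

# With mutually exclusive explanations, P(fever) is a plain sum.
p_fever = sum(explanation_prob(e) for e in explanations_of_fever)
```

Here the dependence between the two rules is carried by the invented hypotheses themselves, in the spirit of the paper's argument.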
Bucket Elimination: A Unifying Framework for Reasoning
Cited by 278 (62 self)
Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional resolution for propositional satisfiability, adaptive consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization can all be accommodated within the bucket-elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include belief updating, finding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the induced width of the problem's interaction graph. While elimination strategies have extensive demands on memory, a contrasting class of algorithms called "conditioning search" requires only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset. Typical examples of conditioning search algorithms are backtracking (in constraint satisfaction) and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and shows how conditioning search can be augmented to systematically trade space for time.
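A minimal sketch of the elimination step for belief updating (a two-variable chain with invented numbers; real implementations handle arbitrary factor arities and elimination orderings): each variable's "bucket" collects the factors mentioning it, multiplies them, and sums the variable out.

```python
from itertools import product

# Toy Bayesian network A -> B, both binary; all numbers are assumed.
# A factor maps assignments over its variables to non-negative reals.
P_A = {"vars": ("A",), "table": {(0,): 0.6, (1,): 0.4}}
P_B_given_A = {"vars": ("A", "B"),
               "table": {(0, 0): 0.9, (0, 1): 0.1,
                         (1, 0): 0.2, (1, 1): 0.8}}

def multiply(f, g):
    """Pointwise product of two factors over the union of their variables."""
    vars_ = tuple(dict.fromkeys(f["vars"] + g["vars"]))
    table = {}
    for assign in product((0, 1), repeat=len(vars_)):  # binary-only sketch
        env = dict(zip(vars_, assign))
        fv = f["table"][tuple(env[v] for v in f["vars"])]
        gv = g["table"][tuple(env[v] for v in g["vars"])]
        table[assign] = fv * gv
    return {"vars": vars_, "table": table}

def sum_out(f, var):
    """Eliminate `var` by summing it out of the factor."""
    keep = tuple(v for v in f["vars"] if v != var)
    table = {}
    for assign, val in f["table"].items():
        env = dict(zip(f["vars"], assign))
        key = tuple(env[v] for v in keep)
        table[key] = table.get(key, 0.0) + val
    return {"vars": keep, "table": table}

# Bucket for A: combine all factors mentioning A, then sum A out,
# leaving the marginal over B.
P_B = sum_out(multiply(P_A, P_B_given_A), "A")
```

The induced-width dependence mentioned in the abstract shows up here as the size of the intermediate factor produced by `multiply`.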
Mini-buckets: A general scheme for bounded inference
Journal of the ACM (JACM)
Cited by 57 (20 self)
This article presents a class of approximation algorithms that extend the idea of bounded-complexity inference, inspired by successful constraint propagation algorithms, to probabilistic inference and combinatorial optimization. The idea is to bound the dimensionality of dependencies created by inference algorithms. This yields a parameterized scheme, called mini-buckets, that offers an adjustable tradeoff between accuracy and efficiency. The mini-bucket approach to optimization problems, such as finding the most probable explanation (MPE) in Bayesian networks, generates both an approximate solution and bounds on the solution quality. We present empirical results demonstrating successful performance of the proposed approximation scheme for the MPE task, both on randomly generated problems and on realistic domains such as medical diagnosis and probabilistic decoding.
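The core inequality behind the mini-bucket bound fits in a few lines (toy single-variable factors with invented numbers): maximizing split parts of a bucket separately, instead of jointly, can only overestimate the exact max-product value, which is what yields an upper bound for MPE.

```python
# Mini-bucket bound sketch: splitting a bucket's factors and maximizing
# each part independently upper-bounds the exact max-product value,
# because max_x f(x)*g(x) <= (max_x f(x)) * (max_x g(x)).
f = {0: 0.3, 1: 0.7}   # hypothetical factor over one binary variable
g = {0: 0.9, 1: 0.4}   # hypothetical factor over the same variable

exact = max(f[x] * g[x] for x in (0, 1))      # process f and g together
bound = max(f.values()) * max(g.values())     # process them separately
```

The gap between `bound` and `exact` is the accuracy lost for the cheaper, lower-dimensional computation; the scheme's parameter controls how much splitting occurs.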
An Optimal Approximation Algorithm For Bayesian Inference
Artificial Intelligence, 1997
Cited by 48 (2 self)
Approximating the inference probability Pr[X = x | E = e] in any sense, even for a single evidence node E, is NP-hard. This result holds for belief networks that are allowed to contain extreme conditional probabilities, that is, conditional probabilities arbitrarily close to 0. Nevertheless, all previous approximation algorithms have failed to approximate efficiently many inferences, even for belief networks without extreme conditional probabilities. We prove that we can approximate efficiently probabilistic inference in belief networks without extreme conditional probabilities. We construct a randomized approximation algorithm, the bounded-variance algorithm, that is a variant of the known likelihood-weighting algorithm. The bounded-variance algorithm is the first algorithm with provably fast inference approximation on all belief networks without extreme conditional probabilities. From the bounded-variance algorithm, we construct a deterministic approximation algorithm u...
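For context, a sketch of plain likelihood weighting, the sampling scheme the bounded-variance algorithm refines (toy two-node network A -> B with invented probabilities; this is not the bounded-variance algorithm itself): non-evidence variables are sampled forward and each sample is weighted by the likelihood of the evidence.

```python
import random

# Hypothetical network A -> B, estimating Pr[A=1 | B=1].
p_a1 = 0.3
p_b1_given_a = {0: 0.2, 1: 0.9}

def likelihood_weighting(n, seed=0):
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = 1 if rng.random() < p_a1 else 0   # sample non-evidence variable
        w = p_b1_given_a[a]                   # weight by evidence likelihood
        num += w * a
        den += w
    return num / den

est = likelihood_weighting(100_000)
```

Here the exact posterior is 0.3*0.9 / (0.3*0.9 + 0.7*0.2), roughly 0.659; when evidence likelihoods get extreme, the weights' variance blows up, which is the failure mode the paper addresses.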
A General Scheme for Automatic Generation of Search Heuristics from Specification Dependencies
Artificial Intelligence, 2001
Cited by 38 (18 self)
The paper presents and evaluates the power of a new scheme that generates search heuristics mechanically for problems expressed using a set of functions or relations over a finite set of variables. The heuristics are extracted from a parameterized approximation scheme called Mini-Bucket elimination that allows a controlled tradeoff between computation and accuracy. The heuristics are used to guide Branch-and-Bound and Best-First search. Their performance is compared on two optimization tasks: the Max-CSP task defined on deterministic databases and the Most Probable Explanation task defined on probabilistic databases. Benchmarks were random data sets as well as applications to coding and medical diagnosis problems. Our results demonstrate that the generated heuristics are effective for both search schemes, permitting a controlled tradeoff between preprocessing (for heuristic generation) and search.
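A minimal sketch of the combination (toy one-variable factors, invented numbers): an optimistic upper bound in the spirit of mini-bucket heuristics lets Branch-and-Bound prune any subtree that cannot beat the incumbent solution.

```python
# Hypothetical max-product task over two binary variables, one factor each.
f1 = {0: 0.3, 1: 0.7}
f2 = {0: 0.9, 1: 0.4}
factors = [f1, f2]

best = [0.0]  # incumbent (best full assignment value found so far)

def search(depth, value):
    """Depth-first Branch-and-Bound with an upper-bounding heuristic."""
    if depth == len(factors):
        best[0] = max(best[0], value)
        return
    # Optimistic bound: current value times the max of each remaining
    # factor taken independently (never underestimates the true optimum).
    bound = value
    for f in factors[depth:]:
        bound *= max(f.values())
    if bound <= best[0]:
        return  # prune: this subtree cannot improve on the incumbent
    for v in (0, 1):
        search(depth + 1, value * factors[depth][v])

search(0, 1.0)
```

Because the bound is admissible (an overestimate for maximization), pruning never discards the optimum, mirroring the guarantee the paper relies on.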
Stochastic local search for Bayesian networks
In Workshop on AI and Statistics, 1999
Cited by 34 (9 self)
The paper evaluates empirically the suitability of Stochastic Local Search (SLS) algorithms for finding most probable explanations in Bayesian networks. SLS algorithms (e.g., GSAT, WSAT [16]) have recently proven to be highly effective in solving complex constraint-satisfaction and satisfiability problems which cannot be solved by traditional search schemes. Our experiments investigate the applicability of this scheme to probabilistic optimization problems. Specifically, we show that algorithms combining hill-climbing steps with stochastic steps (guided by the network's probability distribution), called G+StS, outperform pure hill-climbing search, pure stochastic simulation search, as well as simulated annealing. In addition, variants of G+StS that are augmented on top of alternative approximation methods are shown to be particularly effective.
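The hill-climbing-plus-stochastic-steps idea can be sketched on a toy two-variable network (all structure and numbers invented; the paper's G+StS operates on full Bayesian networks). In this instance greedy flips alone can get stuck in a local maximum, and the stochastic resampling step escapes it:

```python
import random

# Toy network A -> B; the MPE here is (A, B) = (1, 1) with score 0.54.
p_a = {0: 0.4, 1: 0.6}
p_b_given_a = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.9}

def score(a, b):
    return p_a[a] * p_b_given_a[(a, b)]

def g_plus_sts(iters=200, p_stochastic=0.3, seed=1):
    rng = random.Random(seed)
    a, b = rng.randint(0, 1), rng.randint(0, 1)
    best = (score(a, b), a, b)
    for _ in range(iters):
        if rng.random() < p_stochastic:
            # stochastic step: resample A from its prior distribution
            a = 1 if rng.random() < p_a[1] else 0
        else:
            # greedy step: flip whichever variable most improves the score
            cand = [(score(1 - a, b), 1 - a, b), (score(a, 1 - b), a, 1 - b)]
            s, na, nb = max(cand)
            if s > score(a, b):
                a, b = na, nb
        best = max(best, (score(a, b), a, b))
    return best

s, a, b = g_plus_sts()
```

Note that starting from (0, 0) (score 0.28), both single flips lower the score, so pure hill climbing stalls; only the stochastic step can reach the basin of (1, 1).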
Average-case analysis of a search algorithm for estimating prior and posterior probabilities in Bayesian networks with extreme probabilities
1993
Cited by 29 (4 self)
This paper provides a search-based algorithm for computing prior and posterior probabilities in discrete Bayesian networks. This is an "anytime" algorithm that at any stage can estimate the probabilities and give an error bound. Whereas the most popular Bayesian net algorithms exploit the structure of the network for efficiency, we exploit probability distributions for efficiency. The algorithm is most suited to the case where we have extreme (close to zero or one) probabilities, as is the case in many diagnostic situations where we are diagnosing systems that work most of the time, and for commonsense reasoning tasks where normality assumptions (allegedly) dominate. We give a characterisation of those cases where it works well, and discuss how well it can be expected to work on average.
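The anytime flavor can be sketched by enumerating the highest-probability worlds of a toy network (numbers invented, chosen skewed as in diagnosis): the probability mass not yet enumerated is itself an error bound on the estimate, so the search can stop at any stage.

```python
from itertools import product

# Toy network A -> B with skewed ("normal most of the time") numbers.
p_a = {0: 0.95, 1: 0.05}
p_b_given_a = {(0, 0): 0.99, (0, 1): 0.01,
               (1, 0): 0.30, (1, 1): 0.70}

def world_prob(a, b):
    return p_a[a] * p_b_given_a[(a, b)]

# Visit worlds in decreasing probability (a real implementation searches
# rather than sorting all worlds up front).
worlds = sorted(product((0, 1), repeat=2),
                key=lambda w: world_prob(*w), reverse=True)

mass = lower = 0.0
for a, b in worlds[:2]:          # stop "anytime": here, after two worlds
    p = world_prob(a, b)
    mass += p
    if b == 1:
        lower += p               # contribution to the estimate of Pr[B=1]
error_bound = 1.0 - mass         # unexplored mass bounds the error
# Pr[B=1] is guaranteed to lie in [lower, lower + error_bound].
```

With skewed distributions a few worlds capture almost all the mass, which is exactly the regime where the abstract claims the algorithm shines.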
AND/OR Branch-and-Bound search for combinatorial optimization in graphical models
2008
Cited by 26 (16 self)
We introduce a new generation of depth-first Branch-and-Bound algorithms that explore the AND/OR search tree using static and dynamic variable orderings for solving general constraint optimization problems. The virtue of the AND/OR representation of the search space is that its size may be far smaller than that of a traditional OR representation, which can translate into significant time savings for search algorithms. The focus of this paper is on linear-space search, which explores the AND/OR search tree rather than the search graph and therefore makes no attempt to cache information. We investigate the power of the mini-bucket heuristics within the AND/OR search space, in both static and dynamic setups. We focus on two of the most common optimization problems in graphical models: finding the Most Probable Explanation (MPE) in Bayesian networks and solving Weighted CSPs (WCSP). In extensive empirical evaluations we demonstrate that the new AND/OR Branch-and-Bound approach improves considerably over the traditional OR search strategy and show how various variable ordering schemes impact the performance of the AND/OR search scheme.
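A toy illustration of why AND/OR spaces can be far smaller (star network X -> Y1..Yn with invented numbers): once X is assigned, the leaves are independent subproblems, so the MPE value factors into per-leaf maximizations instead of requiring enumeration of all 2^(n+1) full assignments.

```python
from itertools import product

# Toy star network X -> Y1..Yn, all binary; every CPT entry is invented.
n = 10
p_x = {0: 0.4, 1: 0.6}
p_y_given_x = {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.7}

def joint(x, ys):
    """Probability of one full assignment (what an OR tree enumerates)."""
    p = p_x[x]
    for y in ys:
        p *= p_y_given_x[(x, y)]
    return p

# OR-style search: enumerate all 2**(n+1) full assignments.
brute = max(joint(x, ys) for x in (0, 1)
            for ys in product((0, 1), repeat=n))

# AND/OR-style search: under each value of X, solve each leaf
# independently (an AND node over n independent subproblems).
and_or = 0.0
for x in (0, 1):
    val = p_x[x]
    for _ in range(n):
        val *= max(p_y_given_x[(x, 0)], p_y_given_x[(x, 1)])
    and_or = max(and_or, val)
```

The AND/OR computation touches 2 + 2*n*2 table entries versus 2**(n+1) full assignments, while returning the same optimum.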
The use of conflicts in searching Bayesian networks
1993
Cited by 23 (3 self)
This paper discusses how conflicts (as used by the consistency-based diagnosis community) can be adapted to be used in a search-based algorithm for computing prior and posterior probabilities in discrete Bayesian networks. This is an "anytime" algorithm that at any stage can estimate the probabilities and give an error bound. Whereas the most popular Bayesian net algorithms exploit the structure of the network for efficiency, we exploit probability distributions for efficiency; this algorithm is most suited to the case with extreme probabilities. This paper presents a solution to the inefficiencies found in naive algorithms, and shows how the tools of the consistency-based diagnosis community (namely conflicts) can be used effectively to improve the efficiency. Empirical results with networks having tens of thousands of nodes are presented.
Probabilistic conflicts in a search algorithm for estimating posterior probabilities in Bayesian networks
1996
Cited by 23 (6 self)
This paper presents a search algorithm for estimating posterior probabilities in discrete Bayesian networks. It shows how conflicts (as used in consistency-based diagnosis) can be adapted to speed up the search. This algorithm is especially suited to the case where there are skewed distributions, although nothing about the algorithm or the definitions depends on skewness of distributions. The general idea is to forward simulate the network, based on the "normal" values for each variable (the value with high probability given its parents). When a predicted value is at odds with the observations, we analyse which variables were responsible for the expectation failure (these form a conflict) and continue forward simulation considering different values for these variables. This results in a set of possible worlds from which posterior probabilities, together with error bounds, can be derived. Empirical results with Bayesian networks having tens of thousands of nodes are presented.
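A rough sketch of the forward-simulate-then-branch idea on a toy network A -> B with observation B = 1 (all numbers invented; the real algorithm handles large networks and computes minimal conflicts):

```python
# Toy network with skewed numbers: A is normally 0, B normally 0 given A=0.
p_a = {0: 0.95, 1: 0.05}
p_b_given_a = {(0, 0): 0.99, (0, 1): 0.01, (1, 0): 0.2, (1, 1): 0.8}

observed_b = 1

def normal(dist):
    """The 'normal' value: the most probable one under `dist`."""
    return max(dist, key=dist.get)

worlds = []
a = normal(p_a)                                        # forward-simulate A
b = normal({v: p_b_given_a[(a, v)] for v in (0, 1)})   # then predict B
if b != observed_b:
    # Conflict: A's normal value led to the failed expectation, so also
    # explore its alternative values, keeping B fixed to the observation.
    for alt_a in (v for v in p_a if v != a):
        worlds.append((alt_a, observed_b))
    worlds.append((a, observed_b))

# Posterior estimate from the enumerated possible worlds.
mass = {w: p_a[w[0]] * p_b_given_a[w] for w in worlds}
post_a1 = mass[(1, 1)] / sum(mass.values())
```

In this two-node toy the enumeration happens to be exhaustive over worlds consistent with the evidence, so `post_a1` is exact; in general the unexplored mass supplies the error bounds the abstract describes.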