Results 1–10 of 1,044

On the robustness of Most Probable Explanations
In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence
Cited by 9 (2 self)
Abstract: "... In Bayesian networks, a Most Probable Explanation (MPE) is a complete variable instantiation with the highest probability given the current evidence. In this paper, we discuss the problem of finding robustness conditions of the MPE under single parameter changes. Specifically, we ask the question ..."

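As an aside on the task this entry defines: the MPE of a Bayesian network can be made concrete with a minimal brute-force sketch. The two-variable network, its names, and all probabilities below are hypothetical illustrations, not taken from the paper:

```python
from itertools import product

# Toy Bayesian network Rain -> WetGrass over binary variables.
# The CPT numbers are illustrative assumptions.
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    # Chain rule for this two-node network.
    return p_rain[rain] * p_wet_given_rain[rain][wet]

def mpe(evidence):
    """Brute-force MPE: the complete instantiation consistent with the
    evidence that maximizes the joint probability."""
    best, best_p = None, -1.0
    for rain, wet in product([True, False], repeat=2):
        assignment = {"rain": rain, "wet": wet}
        if any(assignment[k] != v for k, v in evidence.items()):
            continue  # inconsistent with the observed evidence
        p = joint(rain, wet)
        if p > best_p:
            best, best_p = assignment, p
    return best, best_p

print(mpe({"wet": True}))  # most probable complete instantiation given wet grass
```

Enumeration is exponential in the number of variables, which is why the papers listed here study search, approximation, and robustness instead.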
Structure approximation of most probable explanations in Bayesian networks.
2013
Cited by 1 (1 self)
Abstract: "... Typically, when one discusses approximation algorithms for (NP-hard) problems (like TRAVELING SALESPERSON, VERTEX COVER, KNAPSACK), one refers to algorithms that return a solution whose value is (at least ideally) close to optimal; e.g., a tour with almost minimal length ... to the optimal tour, a vertex cover that differs in only a few vertices from the optimal cover, or a collection that is similar to the optimal collection. In this paper, we discuss structure-approximation of the problem of finding the most probable explanation of observations in Bayesian networks, i.e., finding ..."

Study of the Most Probable Explanation in Hybrid Bayesian Networks
Cited by 2 (0 self)
Abstract: "... In addition to computing the posterior distributions for hidden variables in Bayesian networks, one other important inference task is to find the most probable explanation (MPE). MPE provides the most likely configurations to explain away the evidence and helps to manage hypotheses for decision making ..."

Best-First AND/OR Search for Most Probable Explanations
UAI, 2007
Cited by 5 (2 self)
Abstract: "... The paper evaluates the power of best-first search over AND/OR search spaces for solving the Most Probable Explanation (MPE) task in Bayesian networks. The main virtue of the AND/OR representation of the search space is its sensitivity to the structure of the problem, which can translate into significant ..."

The complexity of finding kth most probable explanations in probabilistic networks
In Cerna, 2011
Cited by 1 (1 self)
Abstract: "... In modern decision-support systems, probabilistic networks model uncertainty by a directed acyclic graph quantified by probabilities. Two closely related problems on these networks are the Kth MPE and Kth Partial MAP problems, which both take a network and a positive integer k for their input. In the Kth MPE problem, given a partition of the network's nodes into evidence and explanation nodes and given specific values for the evidence nodes, we ask for the kth most probable combination of values for the explanation nodes. In the Kth Partial MAP problem, in addition, a number ..."

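The Kth MPE task this entry describes can be illustrated by brute force: enumerate every explanation consistent with the evidence, sort by probability, and take the k-th. The tiny chain network and its numbers below are hypothetical, and the approach is only feasible for toy instances:

```python
from itertools import product

# Hypothetical chain network A -> B with illustrative CPTs.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}

def kth_mpe(k, evidence):
    """Return the k-th most probable explanation (1-based) among all
    complete instantiations consistent with the evidence."""
    scored = []
    for a, b in product([0, 1], repeat=2):
        x = {"A": a, "B": b}
        if any(x[v] != val for v, val in evidence.items()):
            continue
        scored.append((p_a[a] * p_b_given_a[a][b], x))
    scored.sort(key=lambda t: -t[0])  # stable sort, descending probability
    return scored[k - 1]

# With no evidence the four joint probabilities are 0.42, 0.20, 0.20, 0.18.
print(kth_mpe(2, {}))
```

The hardness results in the paper explain why no such enumeration scales: even for k = 1 the problem is NP-hard in general.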
Abstraction for Efficiently Computing Most Probable Explanations in Bayesian Networks
Abstract: "... Two factors that may severely slow down computation of answers to Bayesian network queries are high graph connectivity (potentially causing high treewidth) and high node cardinalities. In this paper, where we address the problem of high node cardinalities by means of abstraction, two contributions are made. First, we formulate abstraction in Bayesian networks by means of set partitioning, and make connections to previous work using abstraction hierarchies. Second, we investigate the computation of most probable explanations (MPEs) in Bayesian networks, when some nodes are abstracted. In particular ..."

Stochastic Local Search for Solving the Most Probable Explanation Problem in Bayesian Networks
2004
Cited by 6 (5 self)
Abstract: "... In this thesis, we develop and study novel Stochastic Local Search (SLS) algorithms for solving the Most Probable Explanation (MPE) problem in graphical models, that is, to find the most probable instantiation of all variables V in the model, given the observed values E = e of a subset E of V. SLS ..."

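The flavor of SLS for MPE can be conveyed by a minimal hill-climbing sketch with random restarts. This is a generic illustration, not the algorithms developed in the thesis; the toy network A -> B -> C and its CPTs are assumptions:

```python
import random

# Toy binary network A -> B -> C with illustrative CPTs.
p_a = {0: 0.3, 1: 0.7}
p_b = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.4, 1: 0.6}}
p_c = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}

def score(x):
    # Joint probability of a complete instantiation via the chain rule.
    return p_a[x["A"]] * p_b[x["A"]][x["B"]] * p_c[x["B"]][x["C"]]

def sls_mpe(evidence, restarts=20, flips=50, rng=random):
    """Hill climbing with random restarts: from a random completion of
    the evidence, repeatedly take the single-variable flip that most
    increases the joint probability, until a local maximum."""
    free = [v for v in ("A", "B", "C") if v not in evidence]
    best, best_p = None, -1.0
    for _ in range(restarts):
        x = dict(evidence)
        for v in free:
            x[v] = rng.randint(0, 1)  # random restart point
        for _ in range(flips):
            cand = max(({**x, v: 1 - x[v]} for v in free), key=score)
            if score(cand) <= score(x):
                break  # local maximum reached
            x = cand
        if score(x) > best_p:
            best, best_p = dict(x), score(x)
    return best, best_p

random.seed(0)
print(sls_mpe({"C": 1}))  # best explanation found for observed C = 1
```

Real SLS algorithms add noise steps, caching of incremental score changes, and principled restart schedules; the sketch only shows the greedy-flip skeleton they share.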
Abstraction for belief revision: Using a genetic algorithm to compute the most probable explanation
In Proc. 1998 AAAI Spring Symposium on Satisficing Models, 1998
Cited by 3 (3 self)
Abstract: "... A belief network can create a compelling model of an agent’s uncertain environment. Exact belief network inference, including computing the most probable explanation, can be computationally hard. Therefore, it is interesting to perform inference on an approximate belief network rather than on the original ..."

Portfolios in Stochastic Local Search: Efficiently Computing Most Probable Explanations in Bayesian Networks
Cited by 1 (1 self)
Abstract: "... In this article we investigate the use of portfolios (or collections) of heuristics when solving computationally hard problems using stochastic local search. We consider uncertainty reasoning, specifically the computation of most probable explanations in Bayesian networks (BNs). Our contribution is ..."
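The portfolio idea in this entry is, at its simplest, running several heuristics on the same instance and keeping the best result. The skeleton below is a generic illustration of that pattern, not the article's specific scheme; the example objective and heuristics are assumptions:

```python
import random

def portfolio_search(problem, heuristics, rng=random):
    """Run each heuristic in the portfolio on the same problem and
    keep the best-scoring solution any of them found."""
    best, best_score = None, float("-inf")
    for h in heuristics:
        sol, s = h(problem, rng)
        if s > best_score:
            best, best_score = sol, s
    return best, best_score

# Illustrative stand-in 'problem': maximize f(x) = -(x - 3)**2 over 0..10.
def greedy(problem, rng):
    x = max(range(11), key=problem)  # exhaustive greedy pick
    return x, problem(x)

def random_probe(problem, rng):
    x = rng.randint(0, 10)  # one random sample
    return x, problem(x)

f = lambda x: -(x - 3) ** 2
random.seed(1)
print(portfolio_search(f, [random_probe, greedy]))
```

For MPE, each `h` would be a full SLS run with a different heuristic or initialization; the portfolio hedges against any single heuristic's weak instances.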