Results 1–6 of 6
Most Relevant Explanation: Properties, Algorithms, and Evaluations
Abstract

Cited by 5 (3 self)
Most Relevant Explanation (MRE) is a method for finding multivariate explanations for given evidence in Bayesian networks [12]. This paper studies the theoretical properties of MRE and develops an algorithm for finding multiple top MRE solutions. Our study shows that MRE relies on an implicit soft relevance measure in automatically identifying the most relevant target variables and pruning less relevant variables from an explanation. The soft measure also enables MRE to capture the intuitive phenomenon of explaining away encoded in Bayesian networks. Furthermore, our study shows that the solution space of MRE has a special lattice structure which yields interesting dominance relations among the solutions. A K-MRE algorithm based on these dominance relations is developed for generating a set of top solutions that are more representative. Our empirical results show that MRE methods are promising approaches for explanation in Bayesian networks.
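As a concrete illustration of the ranking MRE performs, the sketch below scores partial instantiations of two target variables by the Generalized Bayes Factor, GBF(x; e) = P(e | x) / P(e | not-x), on a toy joint distribution. The variable names, probabilities, and evidence are all invented for illustration and are not taken from the paper.

```python
from itertools import product

# Toy joint distribution P(A, B, E) over three binary variables.
# All probabilities here are invented for illustration.
VARS = ("A", "B", "E")
probs = [0.20, 0.05, 0.10, 0.15, 0.05, 0.25, 0.05, 0.15]
joint = dict(zip(product([0, 1], repeat=3), probs))

def prob(assign):
    """Marginal probability of a partial assignment {variable: value}."""
    return sum(p for world, p in joint.items()
               if all(world[VARS.index(v)] == val for v, val in assign.items()))

def gbf(x, e):
    """GBF(x; e) = P(e | x) / P(e | not-x), where not-x is the
    complement of the partial instantiation x."""
    p_x, p_xe, p_e = prob(x), prob({**x, **e}), prob(e)
    return (p_xe / p_x) / ((p_e - p_xe) / (1.0 - p_x))

evidence = {"E": 1}
# Candidate explanations: every non-empty partial instantiation of A and B.
candidates = ([{"A": a} for a in (0, 1)] + [{"B": b} for b in (0, 1)] +
              [{"A": a, "B": b} for a in (0, 1) for b in (0, 1)])
best = max(candidates, key=lambda x: gbf(x, evidence))
print(best, gbf(best, evidence))
```

On this toy distribution the top-ranked explanation is the single-variable instantiation {A: 1}, which outscores every larger instantiation that contains it: the kind of implicit pruning of less relevant variables the abstract describes.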
Learning optimal Bayesian networks with heuristic search
 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, MISSISSIPPI STATE UNIVERSITY
, 2012
Abstract

Cited by 1 (0 self)
Bayesian networks are a widely used graphical model which formalize reasoning under uncertainty. Unfortunately, construction of a Bayesian network by an expert is time-consuming, and, in some cases, all experts may not agree on the best structure for a problem domain. Additionally, for some complex systems such as those present in molecular biology, experts with an understanding of the entire domain and how individual components interact may not exist. In these cases, we must learn the network structure from available data. This dissertation focuses on score-based structure learning. In this context, a scoring function is used to measure the goodness of fit of a structure to data. The goal is to find the structure which optimizes the scoring function. The first contribution of this dissertation is a shortest-path finding perspective for the problem of learning optimal Bayesian network structures. This perspective builds on earlier dynamic programming strategies, but, as we show, offers much more flexibility. Second, we develop a set of data structures to improve the efficiency of many of the ...
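The shortest-path perspective mentioned above can be sketched on a toy instance: nodes of the "order graph" are subsets of variables, and the edge from U to U ∪ {X} costs the best local score X can achieve with parents chosen from U. The variable names and score table below are invented; a real implementation would compute local scores (e.g., BIC) from data.

```python
import heapq
from itertools import combinations

# Invented local scores score(X, parents) for three variables; lower is
# better (think of a penalized negative log-likelihood computed from data).
VARS = ("A", "B", "C")
scores = {
    ("A", frozenset()): 10.0, ("A", frozenset("B")): 7.0,
    ("A", frozenset("C")): 9.0, ("A", frozenset("BC")): 6.5,
    ("B", frozenset()): 8.0,  ("B", frozenset("A")): 8.5,
    ("B", frozenset("C")): 5.0, ("B", frozenset("AC")): 5.5,
    ("C", frozenset()): 6.0,  ("C", frozenset("A")): 6.2,
    ("C", frozenset("B")): 7.0, ("C", frozenset("AB")): 6.1,
}

def best_score(x, predecessors):
    """Cheapest parent set for x drawn from the given predecessors."""
    return min(scores[(x, frozenset(s))]
               for r in range(len(predecessors) + 1)
               for s in combinations(sorted(predecessors), r))

def learn_optimal_network():
    """Dijkstra over the order graph: start at the empty set, end at the
    full variable set; edge U -> U | {X} costs best_score(X, U)."""
    start, goal = frozenset(), frozenset(VARS)
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d              # total score of an optimal structure
        if d > dist.get(u, float("inf")):
            continue              # stale queue entry
        for x in set(VARS) - u:
            v, nd = u | {x}, d + best_score(x, u)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))

total = learn_optimal_network()
print(total)
```

Each shortest path through the order graph corresponds to a variable ordering, and its length is the score of the best network consistent with that ordering; the graph search view makes heuristics and pruning applicable where plain dynamic programming enumerates everything.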
Some properties of Most Relevant Explanation
 In Proceedings of the 21st International Joint Conference on Artificial Intelligence ExaCt Workshop (ExaCt-09)
, 2009
Abstract

Cited by 1 (1 self)
This paper provides a study of the theoretical properties of Most Relevant Explanation (MRE) [12]. The study shows that MRE defines an implicit soft relevance measure that enables automatic pruning of less relevant or irrelevant variables when generating explanations. The measure also allows MRE to capture the intuitive phenomenon of explaining away encoded in Bayesian networks. Furthermore, we show that the solution space of MRE has a special lattice structure which yields interesting dominance relations among the candidate solutions.
An Exact Algorithm for Solving Most Relevant Explanation in Bayesian Networks
Abstract
Most Relevant Explanation (MRE) is a new inference task in Bayesian networks that finds the most relevant partial instantiation of target variables as an explanation for given evidence by maximizing the Generalized Bayes Factor (GBF). No exact algorithms have been developed for solving MRE previously. This paper fills the void and introduces a breadth-first branch-and-bound MRE algorithm based on a novel upper bound on GBF. The bound is calculated by decomposing the computation of the score into a set of Markov blankets of subsets of evidence variables. Our empirical evaluations show that the proposed algorithm makes exact MRE inference tractable in Bayesian networks that could not be solved previously.
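The paper's actual bound decomposes GBF over Markov blankets of evidence subsets, which is not reproduced here; the sketch below only shows the generic shape of a breadth-first branch-and-bound search, maximizing an invented additive score with a trivially admissible upper bound.

```python
from collections import deque

# Invented additive score: a complete 0/1 assignment earns the weight of
# every variable set to 1. The real algorithm maximizes GBF; this only
# shows the search skeleton, not the paper's Markov-blanket bound.
weights = [3.0, -1.0, 2.0, -4.0, 1.5]

def score(assign):
    """Score of a (possibly partial) assignment."""
    return sum(w for w, v in zip(weights, assign) if v)

def upper_bound(partial):
    """Admissible bound: decided part plus every still-undecided positive
    weight, so it never underestimates the best completion."""
    return score(partial) + sum(w for w in weights[len(partial):] if w > 0)

def bfs_branch_and_bound():
    # Seed the incumbent with the all-zeros assignment so that early
    # levels already have something to prune against.
    best_assign = (0,) * len(weights)
    best_val = score(best_assign)
    frontier = deque([()])            # partial assignments, level by level
    while frontier:
        partial = frontier.popleft()
        for v in (0, 1):
            child = partial + (v,)
            if len(child) == len(weights):
                if score(child) > best_val:
                    best_val, best_assign = score(child), child
            elif upper_bound(child) > best_val:   # otherwise prune
                frontier.append(child)
    return best_val, best_assign

print(bfs_branch_and_bound())
```

Because the bound is a true upper bound, pruning a branch whose bound cannot beat the incumbent never discards the optimum; the quality of the bound (here deliberately crude) determines how much of the exponential space is actually explored.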
Building Bayesian Network based Expert Systems from
Abstract
Abstract—Combining expert knowledge and user explanation with automated reasoning in domains with uncertain information poses significant challenges in terms of representation and reasoning mechanisms. In particular, reasoning structures understandable and usable by humans are often different from the ones used for automated reasoning and data mining systems. Rules with certainty factors represent one possible way to express domain knowledge and build expert systems that can deal with uncertainty. Although convenient to humans, this approach has limitations in accurately modeling the domain. Alternatively, a Bayesian Network allows accurate modeling of a domain and automated reasoning, but its inference is less intuitive to humans. In this paper, we propose a method to combine these two frameworks to build Bayesian Networks from rules and derive user-understandable explanations in terms of these rules. Expert-specified rules are augmented with importance parameters for antecedents and are used to derive probabilistic bounds for the Bayesian Network’s conditional probability table. The partial structure constructed from the rules is then fully learned from the data. The paper also discusses methods for using the rules to provide user-understandable explanations, identify incorrect rules, suggest new rules and perform incremental learning.
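The abstract does not spell out how the antecedent importance parameters map to conditional probabilities, so the sketch below uses the standard noisy-OR model, one common way to turn per-antecedent rule strengths into a full conditional probability table. The rule, parameter values, and variable names are invented and are not from the paper.

```python
from itertools import product

# Invented rule "fever <- flu, infection" with per-antecedent strengths.
# Noisy-OR is used here purely as a stand-in for the paper's mapping.
strengths = {"flu": 0.8, "infection": 0.6}

def noisy_or_cpt(strengths):
    """P(effect = 1 | parents) = 1 - prod(1 - s_i) over active parents."""
    parents = sorted(strengths)
    cpt = {}
    for values in product([0, 1], repeat=len(parents)):
        inhibit = 1.0                 # probability every active cause fails
        for parent, v in zip(parents, values):
            if v:
                inhibit *= 1.0 - strengths[parent]
        cpt[values] = 1.0 - inhibit
    return parents, cpt

parents, cpt = noisy_or_cpt(strengths)
for values, p in cpt.items():
    print(dict(zip(parents, values)), round(p, 3))
```

The appeal of such models is that the expert supplies one strength per antecedent (linear in the number of parents) while the CPT itself grows exponentially, which matches the abstract's goal of bounding CPT entries from compact rule annotations.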
Computational Complexity and Approximation Methods of Most Relevant Explanation
Abstract
Most Relevant Explanation (MRE) is a new approach to generating explanations for given evidence in Bayesian networks. MRE has a solution space containing all the partial instantiations of target variables and is extremely hard to solve. We show in this paper that the decision problem of MRE is NP^PP-complete. For large Bayesian networks, approximate methods may be the only feasible solutions. We observe that the solution space of MRE has a special lattice structure that connects all the potential solutions together. The connectivity motivates us to develop several efficient local search methods for solving MRE. Empirical results show that these methods can efficiently find the optimal MRE solutions for the majority of the test cases in our experiments.
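The lattice connectivity described above is what makes simple local search possible: from any partial instantiation one can move up (assign another variable), down (drop one), or sideways (change a value). The sketch below hill-climbs over that neighborhood; the score table is an invented stand-in for GBF, and all names and values are assumptions.

```python
# Invented score table standing in for GBF(x; e); only the lattice
# neighborhood structure is the point here.
VALUES = {"A": (0, 1), "B": (0, 1), "C": (0, 1)}

def score(x):
    """Favors A=1, B=0 and mildly C=1; every other assignment costs 0.5."""
    points = {("A", 1): 2.0, ("B", 0): 1.5, ("C", 1): 0.2}
    return sum(points.get(item, -0.5) for item in x.items())

def neighbors(x):
    """Lattice moves: assign an extra variable (up), drop an assigned one
    (down), or change an assigned value (sideways)."""
    out = []
    for v, vals in VALUES.items():
        if v not in x:
            out.extend({**x, v: val} for val in vals)
        else:
            out.append({k: w for k, w in x.items() if k != v})
            out.extend({**x, v: val} for val in vals if val != x[v])
    return [n for n in out if n]      # explanations must be non-empty

def hill_climb(start):
    current = start
    while True:
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):
            return current            # local (here also the global) optimum
        current = best

print(hill_climb({"C": 0}))
```

Hill climbing of this kind can stall at local optima in general; the restart and tabu variants the paper evaluates address exactly that weakness.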