Results 1–10 of 36
Causes and explanations: A structural-model approach
 In Proceedings IJCAI-01, 2001
Cited by 118 (9 self)
Abstract: We propose a new definition of actual causes, using structural equations to model counterfactuals. We show that the definition yields a plausible and elegant account of causation that handles well examples which have caused problems for other definitions.

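To make the structural-equations idea in this abstract concrete, here is a minimal sketch of modeling a counterfactual by overriding one equation (a "do" intervention). The forest-fire scenario, variable names, and equations are hypothetical illustrations, not the paper's formal definitions.

```python
# Minimal structural-equations sketch: exogenous values determine endogenous
# variables via equations; a counterfactual replaces one equation with a
# fixed value and re-solves. (Illustrative only; names are hypothetical.)

def solve(u_lightning, u_match, intervention=None):
    """Evaluate the endogenous variables, optionally under an intervention."""
    vals = {"lightning": u_lightning, "match": u_match}
    if intervention:
        vals.update(intervention)  # do(): override the structural equation
    # Structural equation: fire occurs if lightning strikes OR a match is dropped.
    vals["fire"] = vals["lightning"] or vals["match"]
    return vals

# Actual world: no lightning, a dropped match -> fire.
actual = solve(u_lightning=0, u_match=1)
assert actual["fire"] == 1

# Counterfactual: had the match not been dropped, would there still be fire?
counterfactual = solve(u_lightning=0, u_match=1, intervention={"match": 0})
assert counterfactual["fire"] == 0  # fire counterfactually depends on the match
```

The counterfactual dependence exhibited here is the basic ingredient that definitions of actual causation in the structural-model framework build on.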
Axiomatizing causal reasoning
 In Uncertainty in Artificial Intelligence, 1998
Cited by 68 (5 self)
Abstract: Causal models defined in terms of a collection of equations, as defined by Pearl, are axiomatized here. Axiomatizations are provided for three successively more general classes of causal models: (1) the class of recursive theories (those without feedback), (2) the class of theories where the solutions to the equations are unique, and (3) arbitrary theories (where the equations may not have solutions and, if they do, they are not necessarily unique). It is shown that to reason about causality in the most general third class, we must extend the language used by Galles and Pearl (1997, 1998). In addition, the complexity of the decision procedures is characterized for all the languages and classes of models considered.

Efficient Reasoning in Qualitative Probabilistic Networks
 In Proceedings of the 11th National Conference on Artificial Intelligence (AAAI-93), 1993
Cited by 50 (7 self)
Abstract: Qualitative Probabilistic Networks (QPNs) are an abstraction of Bayesian belief networks, replacing numerical relations by qualitative influences and synergies [Wellman, 1990b]. To reason in a QPN is to find the effect of new evidence on each node in terms of the sign of the change in belief (increase or decrease). We introduce a polynomial-time algorithm for reasoning in QPNs, based on local sign propagation. It extends our previous scheme from singly connected to general multiply connected networks. Unlike existing graph-reduction algorithms, it preserves the network structure and determines the effect of evidence on all nodes in the network. This aids meta-level reasoning about the model and automatic generation of intuitive explanations of probabilistic reasoning.

Introduction: A formal representation should not use more specificity than needed to support the reasoning required of it. The appropriate degree of specificity or numerical precision will vary depending on what kind o...

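The local sign propagation this abstract describes can be sketched as a worklist traversal that multiplies signs along edges and combines signs where influences meet. This is an illustrative simplification, not the paper's exact algorithm; the sign algebra (+, -, 0, ?) follows standard QPN conventions.

```python
# Sketch of local sign propagation in a qualitative probabilistic network:
# the effect of evidence is the sign product along a path, and signs from
# multiple paths are combined with sign addition (conflicts yield "?").
# (Illustrative; not the exact algorithm from the paper.)
from collections import deque

def sign_multiply(a, b):
    if a == "0" or b == "0":
        return "0"
    if a == "?" or b == "?":
        return "?"
    return "+" if a == b else "-"

def sign_add(a, b):
    if a == "0":
        return b
    if b == "0":
        return a
    return a if a == b else "?"   # conflicting influences -> ambiguous

def propagate(edges, source, initial_sign="+"):
    """edges: {node: [(neighbor, influence_sign), ...]}; returns node -> sign."""
    signs = {source: initial_sign}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr, infl in edges.get(node, []):
            new = sign_multiply(signs[node], infl)
            old = signs.get(nbr, "0")
            combined = sign_add(old, new)
            if combined != old:       # a node's sign changes at most twice
                signs[nbr] = combined # (0 -> +/- -> ?), so this terminates
                queue.append(nbr)
    return signs

# Chain a -(+)-> b -(-)-> c: positive evidence at a increases b, decreases c.
net = {"a": [("b", "+")], "b": [("c", "-")]}
result = propagate(net, "a")
assert result == {"a": "+", "b": "+", "c": "-"}
```

Because each node's sign can only move monotonically toward "?", the worklist runs in time polynomial in the network size, matching the abstract's complexity claim in spirit.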
Elicitation of Probabilities for Belief Networks: Combining Qualitative and . . .
 In Uncertainty in Artificial Intelligence (95): Proceedings of the 11th Conference, Los Altos, CA, 1995
Cited by 29 (3 self)
Abstract: Although the usefulness of belief networks for reasoning under uncertainty is widely accepted, obtaining the numerical probabilities that they require is still perceived as a major obstacle. Often not enough statistical data is available to allow for reliable probability estimation. Available information may not be directly amenable to encoding in the network. Finally, domain experts may be reluctant to provide numerical probabilities. In this paper, we propose a method for elicitation of probabilities from a domain expert that is noninvasive and accommodates whatever probabilistic information the expert is willing to state. We express all available information, whether qualitative or quantitative in nature, in a canonical form consisting of (in)equalities expressing constraints on the hyperspace of possible joint probability distributions. We then use this canonical form to derive second-order probability distributions over the desired probabilities.

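The canonical-form idea above can be approximated with a simple Monte Carlo sketch: sample candidate joint distributions, keep those satisfying the expert's (in)equality constraints, and treat the survivors as a second-order distribution over a target probability. The constraints and variable names below are hypothetical, and rejection sampling stands in for whatever derivation the paper actually uses.

```python
# Sketch of constraint-based elicitation: sample joint distributions, keep
# those consistent with expert-stated (in)equalities, and summarize the
# induced second-order distribution over a target probability.
# (Illustrative approximation; not the paper's method.)
import random

def random_joint(n_outcomes):
    """A random probability vector (flat Dirichlet via normalized exponentials)."""
    xs = [random.expovariate(1.0) for _ in range(n_outcomes)]
    total = sum(xs)
    return [x / total for x in xs]

# Joint over (Disease, Symptom); outcomes ordered (d,s), (d,~s), (~d,s), (~d,~s).
# Hypothetical expert constraints: P(d) <= 0.2 and P(s|d) > P(s|~d).
def satisfies(p):
    p_d = p[0] + p[1]
    p_s_given_d = p[0] / p_d
    p_s_given_nd = p[2] / (p[2] + p[3])
    return p_d <= 0.2 and p_s_given_d > p_s_given_nd

random.seed(0)
accepted = [p for _ in range(20000) for p in [random_joint(4)] if satisfies(p)]
# Empirical second-order distribution over the target quantity P(s | d):
samples = [p[0] / (p[0] + p[1]) for p in accepted]
mean = sum(samples) / len(samples)
print(f"{len(accepted)} consistent joints; mean P(s|d) = {mean:.2f}")
```

The spread of `samples` indicates how tightly the stated constraints pin down the desired probability; wide spread signals that more elicitation is needed.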
Qualitative Verbal Explanations in Bayesian Belief Networks
 1996
Cited by 27 (4 self)
Abstract: Application of Bayesian belief networks in systems that interact directly with human users, such as decision support systems, requires effective user interfaces. The principal task of such interfaces is bridging the gap between probabilistic models and human intuitive approaches to modeling uncertainty. We describe several methods for automatic generation of qualitative verbal explanations in systems based on Bayesian belief networks. We show simple techniques for explaining the structure of a belief network model and the interactions among its variables. We also present a technique for generating qualitative explanations of reasoning.

Keywords: Explanation, Bayesian belief networks, qualitative probabilistic networks

Introduction: "The purpose of computing is insight, not numbers." (Richard Wesley Hamming) As the increasing number of successful applications in such domains as diagnosis, planning, learning, vision, and natural language processing demonstrates, Bayesian belief ne...

Intercausal Reasoning with Uninstantiated Ancestor Nodes
 In Proceedings of the Ninth Annual Conference on Uncertainty in Artificial Intelligence (UAI-93), 1993
Cited by 24 (10 self)
Abstract: Intercausal reasoning is a common inference pattern involving probabilistic dependence of causes of an observed common effect. The sign of this dependence is captured by a qualitative property called product synergy. The current definition of product synergy is insufficient for intercausal reasoning where there are additional uninstantiated causes of the common effect. We propose a new definition of product synergy and prove its adequacy for intercausal reasoning with direct and indirect evidence for the common effect. The new definition is based on a new matrix property, half-positive semidefiniteness, a weakened form of positive semidefiniteness.

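For two binary causes of a binary effect, the sign of product synergy reduces to the sign of a 2x2 determinant over the conditional probability table; a negative sign corresponds to "explaining away", where observing the effect makes the causes compete. The sketch below illustrates only this basic binary case, with hypothetical numbers, not the paper's generalized definition for uninstantiated ancestors.

```python
# Sketch of product synergy for two binary causes A, B of a binary effect:
# sign of det(M) where M[i][j] = P(effect | A=i, B=j). Negative product
# synergy yields "explaining away". (Illustrative; numbers are hypothetical.)

def product_synergy_sign(m):
    det = m[1][1] * m[0][0] - m[1][0] * m[0][1]
    return "+" if det > 0 else "-" if det < 0 else "0"

# Noisy-OR-style CPT: rows index A (absent/present), columns index B.
cpt = [[0.01, 0.60],
       [0.70, 0.88]]
print(product_synergy_sign(cpt))  # negative: observing the effect and one
                                  # cause makes the other cause less likely
```

Noisy-OR interactions like the one above always exhibit negative product synergy, which is why explaining away is the typical intercausal pattern in diagnostic networks.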
Defining Explanation in Probabilistic Systems
 In Proc. UAI-97, 1997
Cited by 23 (3 self)
Abstract: As probabilistic systems gain popularity and come into wider use, the need for a mechanism that explains the system's findings and recommendations becomes more critical. The system will also need a mechanism for ordering competing explanations. We examine two representative approaches to explanation in the literature, one due to Gärdenfors and one due to Pearl, and show that both suffer from significant problems. We propose an approach to defining a notion of "better explanation" that combines some of the features of both, together with more recent work by Pearl and others on causality.

Introduction: Probabilistic inference is often hard for humans to understand. Even a simple inference in a small domain may seem counterintuitive and surprising; the situation only gets worse for large and complex domains. Thus, a system doing probabilistic inference must be able to explain its findings and recommendations to evoke confidence on the part of the user. Indeed, in experiments wi...

A Review of Explanation Methods for Bayesian Networks
 Knowledge Engineering Review, 2000
Cited by 23 (3 self)
Abstract: One of the key factors for the acceptance of expert systems in real-world domains is the capability to explain their reasoning. This paper describes the basic properties that characterize explanation methods and reviews the methods developed to date for explanation in Bayesian networks.

Graphical Explanation in Belief Networks
 In Journal of Computational and Graphical Statistics, 1997
Cited by 15 (4 self)
Abstract: Belief networks provide an important bridge between statistical modeling and expert systems. In this paper we present methods for visualizing probabilistic "evidence flows" in belief networks, thereby enabling belief networks to explain their behavior. Building on earlier research on explanation in expert systems, we present a hierarchy of explanations, ranging from simple colorings to detailed displays. Our approach complements parallel work on textual explanations in belief networks. GRAPHICAL-BELIEF, MathSoft Inc.'s belief network software, implements the methods.

Introduction: A fundamental reason for building a mathematical or statistical model is to foster deeper understanding of complex, real-world systems. Consequently, explanations (descriptions of the mechanisms which comprise such models) form an important part of model validation, exploration, and use. Early tests of rule-based expert system models indicated the critical need for detailed explanations in that setting (...

Structure-based causes and explanations in the independent choice logic
 In Proceedings UAI-2003, 2003
Cited by 13 (6 self)
Abstract: This paper is directed towards combining Pearl's structural-model approach to causal reasoning with high-level formalisms for reasoning about actions. More precisely, we present a combination of Pearl's structural-model approach with Poole's independent choice logic. We show how probabilistic theories in the independent choice logic can be mapped to probabilistic causal models. This mapping provides the independent choice logic with appealing concepts of causality and explanation from the structural-model approach. We illustrate this with Halpern and Pearl's notions of actual cause, explanation, and partial explanation. Furthermore, this mapping also adds first-order modeling capabilities and explicit actions to the structural-model approach.