Results 1–10 of 44
Exploiting Causal Independence in Bayesian Network Inference
Journal of Artificial Intelligence Research, 1996
Abstract

Cited by 160 (9 self)
A new method is proposed for exploiting causal independencies in exact Bayesian network inference.
Proximity search in databases
In VLDB, 1998
Abstract

Cited by 59 (1 self)
An information retrieval (IR) engine can rank documents based on textual proximity of keywords within each document. In this paper we apply this notion to search across an entire database for objects that are "near" other relevant objects. Proximity search enables simple "focusing" queries based on general relationships among objects, helpful for interactive query sessions. We view the database as a graph, with data in vertices (objects) and relationships indicated by edges. Proximity is defined based on shortest paths between objects. We have implemented a prototype search engine that uses this model to enable keyword searches over databases, and we have found it very effective for quickly finding relevant information. Computing the distance between objects in a graph stored on disk can be very expensive. Hence, we show how to build compact indexes that allow us to quickly find the distance between objects at search time. Experiments show that our algorithms are efficient and scale well.
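The graph-and-shortest-path model this abstract describes can be sketched in a few lines. The object graph, names, and the 1/(1+d) scoring below are illustrative assumptions only; the paper's actual engine relies on precomputed distance indexes rather than query-time search:

```python
from collections import deque

# Hypothetical object graph: vertices are database objects, edges are
# relationships between them. Names and relations are made up for illustration.
graph = {
    "Movie:Unforgiven":  ["Actor:Eastwood", "Award:BestPicture"],
    "Actor:Eastwood":    ["Movie:Unforgiven", "Movie:TrueCrime"],
    "Movie:TrueCrime":   ["Actor:Eastwood"],
    "Award:BestPicture": ["Movie:Unforgiven"],
}

def distances(source):
    """Breadth-first shortest-path distance from source to every reachable object."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def proximity_rank(find_set, near_set):
    """Rank 'find' objects by closeness to the nearest 'near' object."""
    scores = {}
    for f in find_set:
        d = distances(f)
        best = min((d[n] for n in near_set if n in d), default=None)
        scores[f] = 0.0 if best is None else 1.0 / (1.0 + best)
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = proximity_rank(
    find_set=["Movie:Unforgiven", "Movie:TrueCrime"],
    near_set=["Award:BestPicture"],
)
print(ranking)  # [('Movie:Unforgiven', 0.5), ('Movie:TrueCrime', 0.25)]
```

The query-time BFS here is exactly the expensive step the paper avoids: its contribution is a compact disk-resident index from which inter-object distances can be read off quickly instead.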
Axioms of Causal Relevance
Artificial Intelligence, 1996
Abstract

Cited by 52 (13 self)
This paper develops axioms and formal semantics for statements of the form "X is causally irrelevant to Y in context Z," which we interpret to mean "Changing X will not affect Y if we hold Z constant." The axiomatization of causal irrelevance is contrasted with the axiomatization of informational irrelevance, as in "Learning X will not alter our belief in Y, once we know Z." Two versions of causal irrelevance are analyzed, probabilistic and deterministic. We show that, unless stability is assumed, the probabilistic definition yields a very loose structure that is governed by just two trivial axioms. Under the stability assumption, probabilistic causal irrelevance is isomorphic to path interception in cyclic graphs. Under the deterministic definition, causal irrelevance complies with all of the axioms of path interception in cyclic graphs, with the exception of transitivity. We compare our formalism to that of [Lewis, 1973], and offer a graphical method of proving theorems abou...
A Computational Theory of Decision Networks
International Journal of Approximate Reasoning, 1994
Abstract

Cited by 33 (2 self)
This paper is about how to represent and solve decision problems in Bayesian decision theory (e.g. [6]). A general representation named decision networks is proposed based on influence diagrams [10]. This new representation incorporates the idea, from Markov decision process (e.g. [5]), that a decision may be conditionally independent of certain pieces of available information. It also allows multiple cooperative agents and facilitates the exploitation of separability in the utility function. Decision networks inherit the advantages of both influence diagrams and Markov decision processes, which makes them a better representation framework for decision analysis, planning under uncertainty, medical diagnosis and treatment.
Graphical Explanation in Belief Networks
Journal of Computational and Graphical Statistics, 1997
Abstract

Cited by 18 (5 self)
Belief networks provide an important bridge between statistical modeling and expert systems. In this paper we present methods for visualizing probabilistic "evidence flows" in belief networks, thereby enabling belief networks to explain their behavior. Building on earlier research on explanation in expert systems, we present a hierarchy of explanations, ranging from simple colorings to detailed displays. Our approach complements parallel work on textual explanations in belief networks. GRAPHICAL-BELIEF, Mathsoft Inc.'s belief network software, implements the methods. 1 Introduction: A fundamental reason for building a mathematical or statistical model is to foster deeper understanding of complex, real-world systems. Consequently, explanations (descriptions of the mechanisms which comprise such models) form an important part of model validation, exploration, and use. Early tests of rule-based expert system models indicated the critical need for detailed explanations in that setting (...
Probabilities of Causation: Bounds and Identification
Annals of Mathematics and Artificial Intelligence, 2000
Abstract

Cited by 16 (10 self)
This paper deals with the problem of estimating the probability of causation, that is, the probability that one event was the real cause of another, in a given scenario. Starting from structural-semantical definitions of the probabilities of necessary or sufficient causation (or both), we show how to bound these quantities from data obtained in experimental and observational studies, under general assumptions concerning the data-generating process. In particular, we strengthen the results of Pearl (1999) by presenting sharp bounds based on combined experimental and nonexperimental data under no process assumptions, as well as under the mild assumptions of exogeneity (no confounding) and monotonicity (no prevention). These results delineate more precisely the basic assumptions that must be made before statistical measures such as the excess-risk-ratio could be used for assessing attributional quantities such as the probability of causation.
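As a small illustration of the kind of quantity the abstract mentions: under exogeneity, the excess risk ratio lower-bounds the probability of necessity, and with the added monotonicity assumption it identifies it exactly. The numbers below are made up for illustration, not taken from the paper:

```python
# Illustrative (fabricated) rates: 30% of the exposed develop the outcome,
# 10% of the unexposed do.
p_y_given_x  = 0.30   # P(y | x): outcome rate among the exposed
p_y_given_nx = 0.10   # P(y | x'): outcome rate among the unexposed

# Excess risk ratio. Under exogeneity this lower-bounds the probability of
# necessity PN; under exogeneity plus monotonicity it equals PN.
err = (p_y_given_x - p_y_given_nx) / p_y_given_x
print(round(err, 4))  # 0.6667
```

Dropping either assumption widens this to an interval, which is where the paper's sharp bounds from combined experimental and nonexperimental data come in.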
GeNIeRate: An interactive generator of diagnostic Bayesian network models
16th International Workshop on Principles of Diagnosis, 2005
Abstract

Cited by 11 (2 self)
We propose a methodology to simplify and speed up the design of very large Bayesian network models. The models produced using our methodology are based on two simplifying assumptions: (1) the structure of the model has three layers of variables and (2) the interaction among the variables can be modeled by canonical models such as the Noisy-MAX gate. The methodology is implemented in an application named GeNIeRate, which aims at supporting construction of diagnostic Bayesian network models consisting of hundreds or even thousands of variables. Preliminary qualitative evaluation of GeNIeRate shows great promise. We conducted an experiment comparing our approach to traditional techniques for building Bayesian network models by rebuilding a Bayesian network model for diagnosis of liver disorders, HEPAR-II. We found that the performance of the model created with GeNIeRate is comparable to the performance of the original HEPAR-II.
Probabilities of causation: Three counterfactual interpretations and their identification
Synthese, 1999
Abstract

Cited by 10 (3 self)
According to common judicial standard, judgment in favor of plaintiff should be made if and only if it is "more probable than not" that the defendant's action was the cause for the plaintiff's damage (or death). This paper provides formal semantics, based on structural models of counterfactuals, for the probability that event x was a necessary or sufficient cause (or both) of another event y. The paper then explicates conditions under which the probability of necessary (or sufficient) causation can be learned from statistical data, and shows how data from both experimental and nonexperimental studies can be combined to yield information that neither study alone can provide. Finally, we show that necessity and sufficiency are two independent aspects of causation, and that both should be invoked in the construction of causal explanations for specific scenarios.
An efficient factorization for the noisy MAX
 International Journal of Intelligent Systems
Abstract

Cited by 9 (0 self)
Díez’s algorithm for the noisy MAX is very efficient for polytrees, but when the network has loops it has to be combined with local conditioning, a suboptimal propagation algorithm. Other algorithms, based on several factorizations of the conditional probability of the noisy MAX, are not as efficient for polytrees, but can be combined with general propagation algorithms, such as clustering or variable elimination, which are more efficient for networks with loops. In this paper we propose a new factorization of the noisy MAX that amounts to Díez’s algorithm in the case of polytrees and at the same time is more efficient than previous factorizations when combined with either variable elimination or clustering.
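As a rough illustration of why canonical models like the noisy MAX are worth factorizing at all, here is its binary special case, the noisy OR, with made-up parameters; the paper's factorization for multi-valued variables is more involved, but the parameter savings is the same idea:

```python
import itertools

# Binary special case of the noisy MAX (the noisy OR). Parameters are
# illustrative: q[i] is the probability that an active cause X_i alone
# FAILS to produce the effect; leak is the probability the effect occurs
# with no active cause.
q = [0.2, 0.4, 0.7]   # inhibitor probabilities for causes X_1..X_3
leak = 0.05

def p_effect(active):
    """P(Y = 1 | x): one minus the product of the surviving inhibitors."""
    p_y0 = 1.0 - leak
    for qi, xi in zip(q, active):
        if xi:
            p_y0 *= qi
    return 1.0 - p_y0

# The full CPT is determined by len(q) + 1 parameters
# instead of 2**len(q) independent entries.
for x in itertools.product([0, 1], repeat=3):
    print(x, round(p_effect(x), 4))
```

Inference algorithms exploit this structure by never materializing the exponential CPT; the factorizations compared in the paper differ in how that implicit table is decomposed for variable elimination or clustering.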