Results 1–10 of 29
Causes and explanations: A structural-model approach
In Proceedings IJCAI-01, 2001
Abstract

Cited by 180 (12 self)
We propose a new definition of actual causes, using structural equations to model counterfactuals. We show that the definition yields a plausible and elegant account of causation that handles well examples which have caused problems for other definitions.
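The counterfactual flavor of this definition can be illustrated with a toy structural model (the variable names and the simple but-for check below are illustrative assumptions; the full Halpern-Pearl definition adds further conditions over contingencies):

```python
# Toy structural model: fire = match AND oxygen (illustrative example,
# not the paper's formalism in full).
def fire(match, oxygen):
    """Structural equation for the effect variable."""
    return match and oxygen

def is_but_for_cause(intervene_var, context):
    """Simple counterfactual dependence: the effect actually occurs,
    and intervening to flip this one variable makes it disappear."""
    actual = fire(**context)
    flipped = dict(context)
    flipped[intervene_var] = not flipped[intervene_var]
    counterfactual = fire(**flipped)
    return actual and not counterfactual

context = {"match": True, "oxygen": True}
print(is_but_for_cause("match", context))   # True: no match, no fire
print(is_but_for_cause("oxygen", context))  # True: no oxygen, no fire
```

The structural-equation framing matters because the intervention replaces an equation rather than merely conditioning on an observation, which is what separates this account from purely probabilistic ones.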
The Complexity of Causality and Responsibility for Query Answers and Non-Answers
Abstract

Cited by 39 (4 self)
An answer to a query has a well-defined lineage expression (alternatively called how-provenance) that explains how the answer was derived. Recent work has also shown how to compute the lineage of a non-answer to a query. However, the cause of an answer or non-answer is a more subtle notion and consists, in general, of only a fragment of the lineage. In this paper, we adapt Halpern, Pearl, and Chockler’s recent definitions of causality and responsibility to define the causes of answers and non-answers to queries, and their degree of responsibility. Responsibility captures the notion of degree of causality and serves to rank potentially many causes by their relative contributions to the effect. Then, we study the complexity of computing causes and responsibilities for conjunctive queries. It is known that computing causes is NP-complete in general. Our first main result shows that all causes to conjunctive queries can be computed by a relational query which may involve negation. Thus, causality can be computed in PTIME, and very efficiently so. Next, we study computing responsibility. Here, we prove that the complexity depends on the conjunctive query and demonstrate a dichotomy between PTIME and NP-complete cases. For the PTIME cases, we give a non-trivial algorithm, consisting of a reduction to the max-flow computation problem. Finally, we prove that, even when it is in PTIME, responsibility is complete for LOGSPACE, implying that, unlike causality, it cannot be computed by a relational query.
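The definitions being adapted can be sketched by brute force on a toy database (an illustrative assumption, not the paper's PTIME relational-query or max-flow algorithms; the relation names and tuples below are made up):

```python
from itertools import combinations

# Toy database for the Boolean conjunctive query q :- R(x, y), S(y, z).
# Facts are (relation, tuple) pairs; all names are illustrative.
D = {("R", (1, 2)), ("R", (3, 4)), ("S", (2, 5))}

def q(db):
    """Does some R-tuple join with some S-tuple on the middle variable?"""
    return any(r[1][1] == s[1][0]
               for r in db if r[0] == "R"
               for s in db if s[0] == "S")

def responsibility(fact):
    """Degree of responsibility 1/(1+k): k is the size of the smallest
    contingency set Gamma such that q still holds after removing Gamma,
    but fails once fact is removed as well (counterfactual cause)."""
    rest = [f for f in D if f != fact]
    for k in range(len(rest) + 1):
        for gamma in combinations(rest, k):
            db = D - set(gamma)
            if q(db) and not q(db - {fact}):
                return 1 / (1 + k)
    return 0.0  # fact is not a cause of the answer

print(responsibility(("R", (1, 2))))  # 1.0: removing it alone breaks the join
print(responsibility(("R", (3, 4))))  # 0.0: it never participates in a join
```

The exponential loop over contingency sets is exactly what the paper's dichotomy result avoids for the PTIME query classes.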
Explaining Counterexamples Using Causality
Abstract

Cited by 26 (1 self)
When a model does not satisfy a given specification, a counterexample is produced by the model checker to demonstrate the failure. A user must then examine the counterexample trace, in order to visually identify the failure that it demonstrates. If the trace is long, or the specification is complex, finding the failure in the trace becomes a non-trivial task. In this paper, we address the problem of analyzing a counterexample trace and highlighting the failure that it demonstrates. Using the notion of causality, introduced by Halpern and Pearl, we formally define a set of causes for the failure of the specification on the given counterexample trace. These causes are marked as red dots and presented to the user as a visual explanation of the failure. We study the complexity of computing the exact set of causes, and provide a polynomial-time algorithm that approximates it. This algorithm is implemented as a feature in the IBM formal verification platform RuleBase PE, where these visual explanations are an integral part of every counterexample trace. Our approach is independent of the tool that produced the counterexample, and can be applied as a lightweight external layer to any model checking tool, or used to explain simulation traces.
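The kind of counterfactual marking described here can be sketched on a toy failing trace (the property, signal names, and single-flip criterion are illustrative assumptions; the paper approximates the full Halpern-Pearl causes and its implementation is more involved):

```python
# Mark (step, signal) pairs in a failing trace whose single flip would
# satisfy the property -- a crude counterfactual notion of "cause",
# analogous to the red dots on a counterexample trace.
def holds(trace):
    """Toy property: whenever req is high, ack is high one step later."""
    return all(not step["req"] or trace[i + 1]["ack"]
               for i, step in enumerate(trace[:-1]))

def causes(trace):
    marked = []
    for i, step in enumerate(trace):
        for sig in step:
            flipped = [dict(s) for s in trace]
            flipped[i][sig] = not flipped[i][sig]
            if not holds(trace) and holds(flipped):
                marked.append((i, sig))
    return marked

trace = [{"req": True, "ack": False}, {"req": False, "ack": False}]
print(causes(trace))  # [(0, 'req'), (1, 'ack')]
```

Either flipping the offending request off, or supplying the missing acknowledgement, repairs the trace; both positions would be highlighted to the user.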
Defaults and Normality in Causal Structures
Abstract

Cited by 20 (7 self)
A serious defect with the Halpern-Pearl (HP) definition of causality is repaired by combining a theory of causality with a theory of defaults. In addition, it is shown that (despite a claim to the contrary) a cause according to the HP condition need not be a single conjunct. A definition of causality motivated by Wright’s NESS test is shown to always hold for a single conjunct. Moreover, conditions that hold for all the examples considered by HP are given that guarantee that causality according to (this version of) the NESS test is equivalent to the HP definition.
Clarifying the usage of structural models for commonsense causal reasoning
In Proc. AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, 2003
Abstract

Cited by 17 (2 self)
Recently, Halpern and Pearl proposed a definition of actual cause within the framework of structural models. In this paper, we explicate some of the assumptions underlying their definition, and reevaluate the effectiveness of their account. We also briefly contemplate the suitability of structural models as a language for expressing subtle notions of commonsense causation.
Structure-based causes and explanations in the independent choice logic
In Proceedings UAI-2003, 2003
Abstract

Cited by 14 (6 self)
This paper is directed towards combining Pearl’s structural-model approach to causal reasoning with high-level formalisms for reasoning about actions. More precisely, we present a combination of Pearl’s structural-model approach with Poole’s independent choice logic. We show how probabilistic theories in the independent choice logic can be mapped to probabilistic causal models. This mapping provides the independent choice logic with appealing concepts of causality and explanation from the structural-model approach. We illustrate this with Halpern and Pearl’s sophisticated notions of actual cause, explanation, and partial explanation. Furthermore, this mapping also adds first-order modeling capabilities and explicit actions to the structural-model approach.
From probabilistic counterexamples via causality to fault trees
, 2011
Abstract

Cited by 11 (9 self)
In recent years, several approaches to generate probabilistic counterexamples have been proposed. The interpretation of stochastic counterexamples, however, continues to be problematic since they have to be represented as sets of paths, and the number of paths in this set may be very large. Fault trees (FTs) are a well-established industrial technique to represent causalities for possible system hazards resulting from system or system component failures. In this paper we suggest a method to automatically derive FTs from counterexamples, including a mapping of the probability information onto the FT. We extend the structural equation approach by Pearl and Halpern, which is based on Lewis counterfactuals, so that it serves as a justification for the causality that our proposed FT derivation rules imply. We demonstrate the usefulness of our approach by applying it to an industrial case study.
Tracing Data Errors with View-Conditioned Causality
, 2011
Abstract

Cited by 10 (3 self)
A surprising query result is often an indication of errors in the query or the underlying data. Recent work suggests using causal reasoning to find explanations for the surprising result. In practice, however, one often has multiple queries and/or multiple answers, some of which may be considered correct and others unexpected. In this paper, we focus on determining the causes of a set of unexpected results, possibly conditioned on some prior knowledge of the correctness of another set of results. We call this problem View-Conditioned Causality. We adapt the definitions of causality and responsibility for the case of multiple answers/views and provide a non-trivial algorithm that reduces the problem of finding causes and their responsibility to a satisfiability problem that can be solved with existing tools. We evaluate both the accuracy and effectiveness of our approach on a real dataset of user-generated mobile device tracking data, and demonstrate that it can identify causes of error more effectively than static Boolean influence and alternative notions of causality.
Causes and Explanations in the Structural-Model Approach: Tractable Cases
In Proc. Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI 2002), 2002
Abstract

Cited by 10 (3 self)
In this paper, we continue our research on the algorithmic aspects of Halpern and Pearl’s causes and explanations in the structural-model approach. To this end, we present new characterizations of weak causes for certain classes of causal models, which show that under suitable restrictions deciding causes and explanations is tractable. To our knowledge, these are the first explicit tractability results for the structural-model approach.