Results 1-4 of 4
The Complexity of Causality and Responsibility for Query Answers and Non-Answers
"... An answer to a query has a welldefined lineage expression (alternatively called howprovenance) that explains how the answer was derived. Recent work has also shown how to compute the lineage of a nonanswer to a query. However, the cause of an answer or nonanswer is a more subtle notion and consi ..."
Abstract

Cited by 17 (2 self)
An answer to a query has a well-defined lineage expression (alternatively called how-provenance) that explains how the answer was derived. Recent work has also shown how to compute the lineage of a non-answer to a query. However, the cause of an answer or non-answer is a more subtle notion and consists, in general, of only a fragment of the lineage. In this paper, we adapt Halpern, Pearl, and Chockler’s recent definitions of causality and responsibility to define the causes of answers and non-answers to queries, and their degree of responsibility. Responsibility captures the notion of degree of causality and serves to rank potentially many causes by their relative contributions to the effect. Then, we study the complexity of computing causes and responsibilities for conjunctive queries. It is known that computing causes is NP-complete in general. Our first main result shows that all causes for conjunctive queries can be computed by a relational query, which may involve negation. Thus, causality can be computed in PTIME, and very efficiently so. Next, we study computing responsibility. Here, we prove that the complexity depends on the conjunctive query and demonstrate a dichotomy between PTIME and NP-complete cases. For the PTIME cases, we give a non-trivial algorithm, consisting of a reduction to the max-flow computation problem. Finally, we prove that, even when it is in PTIME, computing responsibility is complete for LOGSPACE, implying that, unlike causality, it cannot be computed by a relational query.
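The cause and responsibility definitions above can be made concrete with a small brute-force sketch in Python. The relations, the query `q`, and the helper `responsibility` are invented for illustration; this subset enumeration is exponential and is not the paper's PTIME relational-query or max-flow algorithm.

```python
from itertools import combinations

def responsibility(db, query, t):
    """Brute-force degree of responsibility of tuple t for a query answer.

    t is an actual cause if some contingency set G of other tuples exists
    such that query(db - G) is still true but query(db - G - {t}) is false;
    its responsibility is 1/(1+|G|) for the smallest such G, and 0 if t is
    not a cause at all.
    """
    others = [x for x in db if x != t]
    for k in range(len(others) + 1):          # smallest contingency first
        for G in combinations(others, k):
            rest = set(db) - set(G)
            if query(rest) and not query(rest - {t}):
                return 1.0 / (1 + k)
    return 0.0

# Toy instance (invented data): boolean query "exists x: R(a,x) and S(x,b)"
db = {("R", "a", 1), ("R", "a", 2), ("S", 1, "b"), ("S", 2, "b")}

def q(sub):
    return any(("R", "a", x) in sub and ("S", x, "b") in sub for x in (1, 2))
```

Here every tuple lies on one of two disjoint derivations of the answer, so no tuple is counterfactual on its own, but each is an actual cause with a size-1 contingency set and hence responsibility 1/2.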
Explaining Counterexamples Using Causality
"... Abstract. When a model does not satisfy a given specification, a counterexample is produced by the model checker to demonstrate the failure. A user must then examine the counterexample trace, in order to visually identify the failure that it demonstrates. If the trace is long, or the specification i ..."
Abstract

Cited by 15 (0 self)
When a model does not satisfy a given specification, a counterexample is produced by the model checker to demonstrate the failure. A user must then examine the counterexample trace, in order to visually identify the failure that it demonstrates. If the trace is long, or the specification is complex, finding the failure in the trace becomes a non-trivial task. In this paper, we address the problem of analyzing a counterexample trace and highlighting the failure that it demonstrates. Using the notion of causality, introduced by Halpern and Pearl, we formally define a set of causes for the failure of the specification on the given counterexample trace. These causes are marked as red dots and presented to the user as a visual explanation of the failure. We study the complexity of computing the exact set of causes, and provide a polynomial-time algorithm that approximates it. This algorithm is implemented as a feature in the IBM formal verification platform RuleBase PE, where these visual explanations are an integral part of every counterexample trace. Our approach is independent of the tool that produced the counterexample, and can be applied as a lightweight external layer to any model checking tool, or used to explain simulation traces.
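As a rough sketch of the idea (not the paper's algorithm or the RuleBase PE implementation), the Python below marks the counterfactual causes of a failing trace: the (step, signal) positions whose single value flip would satisfy a hypothetical specification. Restricting to single flips is a crude under-approximation of the Halpern-Pearl causes the paper computes.

```python
def counterfactual_causes(trace, spec):
    """trace: list of dicts mapping signal name -> bool; spec: trace -> bool,
    assumed False on the given trace. Returns the (step, signal) pairs whose
    single flip makes the spec pass -- the 'red dot' positions."""
    causes = []
    for i, state in enumerate(trace):
        for sig in state:
            flipped = [dict(s) for s in trace]   # copy, then flip one value
            flipped[i][sig] = not flipped[i][sig]
            if spec(flipped):
                causes.append((i, sig))
    return causes

# Hypothetical spec: "req at step i implies ack at step i+1"
def spec(tr):
    return all(not tr[i]["req"] or tr[i + 1]["ack"] for i in range(len(tr) - 1))

# Failing trace: req raised at step 0, never acknowledged.
trace = [{"req": True, "ack": False}, {"req": False, "ack": False}]
```

Either withdrawing the request at step 0 or granting the acknowledge at step 1 repairs this trace, so both positions are flagged as causes of the failure.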
Why so? or Why no? Functional causality for explaining query answers
CoRR, 2009
"... In this paper, we propose causality as a unified framework to explain query answers and nonanswers, thus generalizing and extending several previously proposed definitions of provenance and missing query result explanations. Starting from the established definition of actual causes by Halpern and ..."
Abstract

Cited by 5 (3 self)
In this paper, we propose causality as a unified framework to explain query answers and non-answers, thus generalizing and extending several previously proposed definitions of provenance and missing query result explanations. Starting from the established definition of actual causes by Halpern and Pearl [12], we propose functional causes as a refined definition of causality with several desirable properties. These properties allow us to apply our notion of causality in a database context and apply it uniformly to define the causes of query results and their individual contributions in several ways: (i) we can model both provenance as well as non-answers, (ii) we can define explanations as either data in the input relations or relational operations in a query plan, and (iii) we can give graded degrees of responsibility to individual causes, thus allowing us to rank causes. In particular, our approach allows us to explain contributions to relational aggregate functions and to rank causes according to their respective responsibilities, aiding users in identifying errors in uncertain or untrusted data. Throughout the paper, we illustrate the applicability of our framework with several examples. This is the first work that treats “positive” and “negative” provenance under the same framework, and establishes the theoretical foundations of causality theory in a database context.
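Point (iii), graded responsibility for an aggregate result, can be illustrated with a brute-force Python sketch. The sales figures, the threshold, and the helper name are invented, and the subset enumeration stands in for the paper's functional-cause machinery.

```python
from itertools import combinations

def responsibility_for_sum(values, threshold, idx):
    """Responsibility of values[idx] for the fact 'sum(values) > threshold':
    1/(1+|G|) for the smallest set G of other indices whose removal keeps
    the sum above the threshold while additionally dropping values[idx]
    pushes it to the threshold or below. Returns 0 if no such G exists."""
    others = [i for i in range(len(values)) if i != idx]
    for k in range(len(others) + 1):
        for G in combinations(others, k):
            kept = sum(v for i, v in enumerate(values) if i not in G)
            if kept > threshold and kept - values[idx] <= threshold:
                return 1.0 / (1 + k)
    return 0.0

vals = [50, 30, 20]        # hypothetical per-region sales
# Answer under scrutiny: "total sales exceeded 60"
ranked = sorted(range(len(vals)),
                key=lambda i: -responsibility_for_sum(vals, 60, i))
```

The largest value is counterfactual on its own (responsibility 1), while the two smaller values each need a one-element contingency set (responsibility 1/2), so the ranking surfaces the biggest contributor first.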
Efficient Automatic STE Refinement Using Responsibility
"... Abstract. Symbolic Trajectory Evaluation (STE) is a powerful technique for hardware model checking. It is based on 3valued symbolic simulation, using 0,1, and X (“unknown”). X is used to abstract away values of circuit nodes, thus reducing memory and runtime of STE runs. The abstraction is derived ..."
Abstract

Cited by 1 (1 self)
Symbolic Trajectory Evaluation (STE) is a powerful technique for hardware model checking. It is based on 3-valued symbolic simulation, using 0, 1, and X (“unknown”). X is used to abstract away values of circuit nodes, thus reducing the memory and runtime of STE runs. The abstraction is derived from a given user specification. An STE run results in “pass” (1) if the circuit satisfies the specification, “fail” (0) if the circuit falsifies it, and “unknown” (X) if the abstraction is too coarse to determine either of the two. In the latter case, refinement is needed: the X values of some of the abstracted inputs should be replaced. The main difficulty is to choose an appropriate subset of these inputs that will help to eliminate the “unknown” STE result, while avoiding an unnecessary increase in memory and runtime. The common approach to this problem is to choose these inputs manually. This work suggests a novel approach to automatic refinement for STE, which is based on the notion of responsibility. For each input with X value we compute its Degree of Responsibility (DoR) for the “unknown” STE result. We then refine those inputs whose DoR is maximal. We implemented an efficient algorithm, which is linear in the size of the circuit, for computing the approximate DoR of inputs. We used it for refinements for STE on several circuits and specifications. Our experimental results show that DoR is a very useful device for choosing inputs for refinement. In comparison with previous works on automatic refinement, our computation of the refinement set is faster, STE needs fewer refinement iterations, and uses less overall memory and time.
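A toy illustration of the idea in Python: 3-valued gate evaluation over {0, 1, X} and a brute-force DoR for the X inputs. The 3-input netlist and the `dor` helper are invented for illustration; the paper's actual algorithm is linear in the circuit size and only approximates such values.

```python
from itertools import combinations, product

X = "X"  # the "unknown" value of the 3-valued domain {0, 1, X}

def and3(a, b):
    if a == 0 or b == 0:
        return 0
    if a == 1 and b == 1:
        return 1
    return X

def or3(a, b):
    if a == 1 or b == 1:
        return 1
    if a == 0 and b == 0:
        return 0
    return X

def out(v):
    # Hypothetical netlist: out = (a AND b) OR c
    return or3(and3(v["a"], v["b"]), v["c"])

def dor(target, x_inputs, base):
    """Brute-force DoR of X-input `target` for the unknown output:
    1/(1+|S|) for the smallest set S of OTHER X inputs such that some
    concretization of S leaves the output X while additionally
    concretizing `target` resolves it to 0 or 1."""
    others = [i for i in x_inputs if i != target]
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            for bits in product((0, 1), repeat=k):
                v = dict(base, **dict(zip(S, bits)))
                if out(v) != X:
                    continue
                if any(out(dict(v, **{target: b})) != X for b in (0, 1)):
                    return 1.0 / (1 + k)
    return 0.0

base = {"a": X, "b": X, "c": X}   # all inputs abstracted away
```

Input `c` can resolve the OR gate by itself (DoR 1), whereas `a` or `b` only helps once the other AND operand is concretized (DoR 1/2), so a responsibility-guided refinement concretizes `c` first.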