Results 1 – 6 of 6
The Complexity of Causality and Responsibility for Query Answers and non-Answers
Abstract

Cited by 20 (2 self)
An answer to a query has a well-defined lineage expression (alternatively called how-provenance) that explains how the answer was derived. Recent work has also shown how to compute the lineage of a non-answer to a query. However, the cause of an answer or non-answer is a more subtle notion and consists, in general, of only a fragment of the lineage. In this paper, we adapt Halpern, Pearl, and Chockler’s recent definitions of causality and responsibility to define the causes of answers and non-answers to queries, and their degree of responsibility. Responsibility captures the notion of degree of causality and serves to rank potentially many causes by their relative contributions to the effect. Then, we study the complexity of computing causes and responsibilities for conjunctive queries. It is known that computing causes is NP-complete in general. Our first main result shows that all causes to conjunctive queries can be computed by a relational query which may involve negation. Thus, causality can be computed in PTIME, and very efficiently so. Next, we study computing responsibility. Here, we prove that the complexity depends on the conjunctive query and demonstrate a dichotomy between PTIME and NP-complete cases. For the PTIME cases, we give a non-trivial algorithm, consisting of a reduction to the max-flow computation problem. Finally, we prove that, even when it is in PTIME, responsibility is complete for LOGSPACE, implying that, unlike causality, it cannot be computed by a relational query.
Explaining Counterexamples Using Causality
Abstract

Cited by 17 (0 self)
When a model does not satisfy a given specification, a counterexample is produced by the model checker to demonstrate the failure. A user must then examine the counterexample trace, in order to visually identify the failure that it demonstrates. If the trace is long, or the specification is complex, finding the failure in the trace becomes a non-trivial task. In this paper, we address the problem of analyzing a counterexample trace and highlighting the failure that it demonstrates. Using the notion of causality, introduced by Halpern and Pearl, we formally define a set of causes for the failure of the specification on the given counterexample trace. These causes are marked as red dots and presented to the user as a visual explanation of the failure. We study the complexity of computing the exact set of causes, and provide a polynomial-time algorithm that approximates it. This algorithm is implemented as a feature in the IBM formal verification platform RuleBase PE, where these visual explanations are an integral part of every counterexample trace. Our approach is independent of the tool that produced the counterexample, and can be applied as a lightweight external layer to any model checking tool, or used to explain simulation traces.
Why So? or Why No? Functional Causality for Explaining Query Answers
CoRR, 2009
Abstract

Cited by 7 (3 self)
In this paper, we propose causality as a unified framework to explain query answers and non-answers, thus generalizing and extending several previously proposed definitions of provenance and missing query result explanations. Starting from the established definition of actual causes by Halpern and Pearl [12], we propose functional causes as a refined definition of causality with several desirable properties. These properties allow us to apply our notion of causality in a database context and apply it uniformly to define the causes of query results and their individual contributions in several ways: (i) we can model both provenance as well as non-answers, (ii) we can define explanations as either data in the input relations or relational operations in a query plan, and (iii) we can give graded degrees of responsibility to individual causes, thus allowing us to rank causes. In particular, our approach allows us to explain contributions to relational aggregate functions and to rank causes according to their respective responsibilities, aiding users in identifying errors in uncertain or untrusted data. Throughout the paper, we illustrate the applicability of our framework with several examples. This is the first work that treats “positive” and “negative” provenance under the same framework, and establishes the theoretical foundations of causality theory in a database context.
Managing Complexity through Abstraction: A Refinement-Based Approach to Formalize Instruction Set Architectures
Newcastle University, 2011
Abstract

Cited by 1 (0 self)
Verifying the functional correctness of a processor requires a sound and complete specification of its Instruction Set Architecture (ISA). Current industrial practice is to describe a processor’s ISA informally using natural language, often with added semi-formal notation to capture the functional intent of the instructions. This leaves scope for errors and inconsistencies. In this paper we present a method to specify, design and construct sound and complete ISAs by stepwise refinement and formal proof using the formal method Event-B. We discuss how the automatically generated Proof Obligations help to ensure self-consistency of the formal ISA model, and how desirable properties of ISAs can be enforced within this modeling framework. We have developed a generic ISA modeling template in Event-B to facilitate reuse. The key value of reusing such a template is increased model integrity. Our method is now being used to formalize the ISA of the XMOS XCore processor with the aim to guarantee that the documentation of the XCore matches the silicon and the silicon matches the architectural intent.
Efficient Automatic STE Refinement Using Responsibility
Abstract

Cited by 1 (1 self)
Symbolic Trajectory Evaluation (STE) is a powerful technique for hardware model checking. It is based on 3-valued symbolic simulation, using 0, 1, and X (“unknown”). X is used to abstract away values of circuit nodes, thus reducing memory and runtime of STE runs. The abstraction is derived from a given user specification. An STE run results in “pass” (1) if the circuit satisfies the specification, “fail” (0) if the circuit falsifies it, and “unknown” (X) if the abstraction is too coarse to determine either of the two. In the latter case, refinement is needed: the X values of some of the abstracted inputs should be replaced. The main difficulty is to choose an appropriate subset of these inputs that will help to eliminate the “unknown” STE result, while avoiding an unnecessary increase in memory and runtime. The common approach to this problem is to manually choose these inputs. This work suggests a novel approach to automatic refinement for STE, which is based on the notion of responsibility. For each input with X value we compute its Degree of Responsibility (DoR) to the “unknown” STE result. We then refine those inputs whose DoR is maximal. We implemented an efficient algorithm, which is linear in the size of the circuit, for computing the approximate DoR of inputs. We used it for refinements for STE on several circuits and specifications. Our experimental results show that DoR is a very useful device for choosing inputs for refinement. In comparison with previous works on automatic refinement, our computation of the refinement set is faster, STE needs fewer refinement iterations, and uses less overall memory and time.
Formal Methods Section
Abstract
The relationship between accountability and identity in online life presents many interesting questions. Here, we first systematically survey the various (directed) relationships among principals, system identities (nyms) used by principals, and actions carried out by principals using those nyms. We also map these relationships to corresponding accountability-related properties from the literature. Because punishment is fundamental to accountability, we then focus on the relationship between punishment and the strength of the connection between principals and nyms. To study this particular relationship, we formulate a utility-theoretic framework that distinguishes between principals and the identities they may use to commit violations. In doing so, we argue that the analogue applicable to our setting of the well-known concept of quasi-linear utility is insufficiently rich to capture important properties such as reputation. We propose more general utilities with linear transfer that do seem suitable for this model. In our use of this framework, we define notions of “open” and “closed” systems. This distinction captures the degree to which system participants are required to be bound to their system identities as a condition of participating in the system. This allows us to study the relationship between the strength of identity binding and the accountability properties of a system.