Results 1–10 of 57
Addendum to Identification of Conditional Interventional Distributions
, 2007
Abstract

Cited by 57 (28 self)
The subject of this paper is the elucidation of effects of actions from causal assumptions represented as a directed graph, and statistical knowledge given as a probability distribution. In particular, we are interested in predicting distributions on post-action outcomes given a set of measurements. We provide a necessary and sufficient graphical condition for the cases where such distributions can be uniquely computed from the available information, as well as an algorithm which performs this computation whenever the condition holds. Furthermore, we use our results to prove completeness of do-calculus [Pearl, 1995] for the same identification problem, and show applications to sequential decision making.
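As a concrete illustration of computing a post-action distribution, the sketch below applies the truncated-factorization formula P_x(v) = Π_{i: V_i ∉ X} P(v_i | pa_i) in a toy, fully observable model; the graph Z → X → Y (with Z → Y) and all probability tables are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: truncated factorization for P(y | do(x)) in a toy
# Markovian model Z -> X -> Y with Z -> Y (all numbers illustrative).
P_z = {0: 0.6, 1: 0.4}                         # P(Z)
P_x_given_z = {(0, 0): 0.7, (1, 0): 0.3,       # P(X | Z), keyed (x, z);
               (0, 1): 0.2, (1, 1): 0.8}       # this factor is dropped under do(x)
P_y_given_xz = {(0, 0, 0): 0.9, (1, 0, 0): 0.1,  # P(Y | X, Z), keyed (y, x, z)
                (0, 1, 0): 0.4, (1, 1, 0): 0.6,
                (0, 0, 1): 0.5, (1, 0, 1): 0.5,
                (0, 1, 1): 0.2, (1, 1, 1): 0.8}

def p_y_do_x(y, x):
    # Truncated factorization: delete the factor P(x | z) and sum out z.
    return sum(P_z[z] * P_y_given_xz[(y, x, z)] for z in (0, 1))
```

For these tables, P(y=1 | do(x=1)) = 0.6·0.6 + 0.4·0.8 = 0.68, which coincides with back-door adjustment on Z since the model has no hidden variables.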
On the identification of causal effects
, 2003
Abstract

Cited by 30 (5 self)
This paper deals with the problem of inferring cause-effect relationships from a combination of data and theoretical assumptions. This problem arises in diverse fields such as artificial intelligence, statistics, cognitive science, economics, and the health and social sciences. For example, investigators in the health sciences are …
Identifiability in causal Bayesian networks: A sound and complete algorithm
 In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI 2006), Menlo Park, CA
, 2006
Abstract

Cited by 24 (0 self)
This paper addresses the problem of identifying causal effects from non-experimental data in a causal Bayesian network, i.e., a directed acyclic graph that represents causal relationships. The identifiability question asks whether it is possible to compute the probability of some set of (effect) variables given intervention on another set of (intervention) variables, in the presence of non-observable (i.e., hidden or latent) variables. It is well known that the answer to the question depends on the structure of the causal Bayesian network, the set of observable variables, the set of effect variables, and the set of intervention variables. Our work is based on the work of Tian, Pearl, Huang, and Valtorta (Tian & Pearl 2002a; 2002b; 2003; Huang & Valtorta 2006a) and extends it. We show that the identify algorithm that Tian and Pearl define and prove sound for semi-Markovian models can be transferred to general causal graphs and is not only sound, but also complete. This result effectively solves the identifiability question for causal Bayesian networks that Pearl posed in 1995 (Pearl 1995), by providing a sound and complete algorithm for identifiability.
Complete Identification Methods for the Causal Hierarchy
Abstract

Cited by 21 (9 self)
We consider a hierarchy of queries about causal relationships in graphical models, where each level in the hierarchy requires more detailed information than the one below. The hierarchy consists of three levels: associative relationships, derived from a joint distribution over the observable variables; cause-effect relationships, derived from distributions resulting from external interventions; and counterfactuals, derived from distributions that span multiple “parallel worlds” and result from simultaneous, possibly conflicting observations and interventions. We completely characterize cases where a given causal query can be computed from information lower in the hierarchy, and provide algorithms that accomplish this computation. Specifically, we show when effects of interventions can be computed from observational studies, and when probabilities of counterfactuals can be computed from experimental studies. We also provide a graphical characterization of those queries which cannot be computed (by any method) from queries at a lower layer of the hierarchy.
What counterfactuals can be tested
 In Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence
, 2007
Abstract

Cited by 20 (9 self)
Counterfactual statements, e.g., “my headache would be gone had I taken an aspirin,” are central to scientific discourse, and are formally interpreted as statements derived from “alternative worlds.” However, since they invoke hypothetical states of affairs, often incompatible with what is actually known or observed, testing counterfactuals is fraught with conceptual and practical difficulties. In this paper, we provide a complete characterization of “testable counterfactuals,” namely, counterfactual statements whose probabilities can be inferred from physical experiments. We provide complete procedures for discerning whether a given counterfactual is testable and, if so, expressing its probability in terms of experimental data.
On the validity of covariate adjustment for estimating causal effects
 In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, Eds. P. Grunwald & P. Spirtes
, 2010
Abstract

Cited by 20 (3 self)
Identifying effects of actions (treatments) on outcome variables from observational data and causal assumptions is a fundamental problem in causal inference. This identification is made difficult by the presence of confounders, which can be related to both treatment and outcome variables. Confounders are often handled, both in theory and in practice, by adjusting for covariates, in other words considering outcomes conditioned on treatment and covariate values, weighted by the probability of observing those covariate values. In this paper, we give a complete graphical criterion for covariate adjustment, which we term the adjustment criterion, and derive some interesting corollaries of the completeness of this criterion.
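The covariate-adjustment formula the paper studies can be exercised on a small discrete example. In the sketch below, the joint table over binary (X, Y, Z) and the assumption that Z satisfies the adjustment criterion for (X, Y) are illustrative, not taken from the paper.

```python
# Hedged sketch: covariate adjustment P(y | do(x)) = sum_z P(z) P(y | x, z),
# computed from an illustrative joint distribution over binary X, Y, Z,
# assuming Z satisfies the adjustment criterion for (X, Y).
joint = {  # P(x, y, z), keyed (x, y, z); entries sum to 1
    (0, 0, 0): 0.20, (0, 1, 0): 0.10, (1, 0, 0): 0.05, (1, 1, 0): 0.15,
    (0, 0, 1): 0.05, (0, 1, 1): 0.10, (1, 0, 1): 0.10, (1, 1, 1): 0.25,
}

def p(pred):
    # Probability of the event defined by pred(x, y, z).
    return sum(v for k, v in joint.items() if pred(*k))

def adjusted(y, x):
    # sum_z P(z) * P(y | x, z), with conditionals formed as ratios of joints
    total = 0.0
    for z in (0, 1):
        pz = p(lambda X, Y, Z: Z == z)
        pxz = p(lambda X, Y, Z: X == x and Z == z)
        pyxz = p(lambda X, Y, Z: X == x and Y == y and Z == z)
        total += pz * (pyxz / pxz)
    return total
```

For this table the adjusted effect P(y=1 | do(x=1)) ≈ 0.732, whereas the unadjusted conditional P(y=1 | x=1) = 0.40/0.55 ≈ 0.727; the gap is what adjusting for the confounding covariate corrects.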
Dormant independence
 In Proceedings of the Twenty-Third Conference on Artificial Intelligence
, 2008
Abstract

Cited by 18 (12 self)
The construction of causal graphs from non-experimental data rests on a set of constraints that the graph structure imposes on all probability distributions compatible with the graph. These constraints are of two types: conditional independencies and algebraic constraints, first noted by Verma. While conditional independencies are well studied and frequently used in causal induction algorithms, Verma constraints are still poorly understood, and rarely applied. In this paper we examine a special subset of Verma constraints which are easy to understand, easy to identify, and easy to apply; they arise from “dormant independencies,” namely, conditional independencies that hold in interventional distributions. We give a complete algorithm for determining whether a dormant independence between two sets of variables is entailed by the causal graph and is identifiable, in other words, whether it resides in an interventional distribution that can be predicted without resorting to interventions. We further show the usefulness of dormant independencies in model testing and induction by giving an algorithm that uses constraints entailed by dormant independencies to prune extraneous edges from a given causal graph.
The do-calculus revisited
, 2012
Abstract

Cited by 17 (2 self)
The do-calculus was developed in 1995 to facilitate the identification of causal effects in nonparametric models. The completeness proofs of [Huang and Valtorta, 2006] and [Shpitser and Pearl, 2006] and the graphical criteria of [Tian and Shpitser, 2010] have laid this identification problem to rest. Recent explorations unveil the usefulness of the do-calculus in three additional areas: mediation analysis [Pearl, 2012], transportability [Pearl and Bareinboim, 2011], and meta-synthesis. Meta-synthesis (freshly coined) is the task of fusing empirical results from several diverse studies, conducted on heterogeneous populations and under different conditions, so as to synthesize an estimate of a causal relation in some target environment, potentially different from those under study. The talk surveys these results with emphasis on the challenges posed by meta-synthesis. For background material, see …
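For reference, the three inference rules of the do-calculus [Pearl, 1995], whose completeness the papers above establish, can be stated as follows, writing $G_{\overline{X}}$ for the graph with arrows into $X$ deleted and $G_{\underline{X}}$ for the graph with arrows out of $X$ deleted:

```latex
% Rule 1 (insertion/deletion of observations):
P(y \mid do(x), z, w) = P(y \mid do(x), w)
  \quad \text{if } (Y \perp\!\!\!\perp Z \mid X, W)_{G_{\overline{X}}}

% Rule 2 (action/observation exchange):
P(y \mid do(x), do(z), w) = P(y \mid do(x), z, w)
  \quad \text{if } (Y \perp\!\!\!\perp Z \mid X, W)_{G_{\overline{X}\,\underline{Z}}}

% Rule 3 (insertion/deletion of actions):
P(y \mid do(x), do(z), w) = P(y \mid do(x), w)
  \quad \text{if } (Y \perp\!\!\!\perp Z \mid X, W)_{G_{\overline{X}\,\overline{Z(W)}}}
```

where $Z(W)$ is the set of $Z$-nodes that are not ancestors of any $W$-node in $G_{\overline{X}}$.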
Effects of treatment on the treated: Identification and generalization
 In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence
, 2009
Abstract

Cited by 16 (10 self)
Many applications of causal analysis call for assessing, retrospectively, the effect of withholding an action that has in fact been implemented. This counterfactual quantity, sometimes called the “effect of treatment on the treated” (ETT), has been used to evaluate educational programs, critique public policies, and justify individual decision making. In this paper we explore the conditions under which ETT can be estimated from (i.e., identified in) experimental and/or observational studies. We show that, when the action invokes a singleton variable, the conditions for ETT identification have simple characterizations in terms of causal diagrams. We further give a graphical characterization of the conditions under which the effects of multiple treatments on the treated can be identified, as well as ways in which the ETT estimand can be constructed from both interventional and observational distributions.