Causal inference in statistics: An Overview, 2009
Abstract

Cited by 51 (11 self)
This review presents empirical researchers with recent advances in causal inference, and stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underlie all causal inferences, the languages used in formulating those assumptions, the conditional nature of all causal and counterfactual claims, and the methods that have been developed for the assessment of such claims. These advances are illustrated using a general theory of causation based on the Structural Causal Model (SCM) described in Pearl (2000a), which subsumes and unifies other approaches to causation, and provides a coherent mathematical foundation for the analysis of causes and counterfactuals. In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions (also called “causal effects” or “policy evaluation”), (2) queries about probabilities of counterfactuals (including assessment of “regret,” “attribution,” or “causes of effects”), and (3) queries about direct and indirect effects (also known as “mediation”). Finally, the paper defines the formal and conceptual relationships between the structural and potential-outcome frameworks and presents tools for a symbiotic analysis that uses the strong features of both.
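The gap between associational and interventional queries that this survey emphasizes can be illustrated numerically. The following is a minimal sketch; the toy model and the variable names Z, X, Y are illustrative assumptions of mine, not taken from the paper. A confounder Z makes the observational quantity P(Y=1 | X=1) differ from the interventional P(Y=1 | do(X=1)), which is simulated here by forcing X rather than observing it.

```python
import numpy as np

# Toy confounded model (illustrative): Z influences both X and Y.
#   Z ~ Bernoulli(0.5);  X = 1 if Z = 1, else with prob 0.2;
#   P(Y=1 | X, Z) = 0.2 + 0.3*X + 0.4*Z
rng = np.random.default_rng(0)
n = 200_000
Z = rng.random(n) < 0.5
X = Z | (rng.random(n) < 0.2)                    # observational X, driven by Z
Y = rng.random(n) < 0.2 + 0.3 * X + 0.4 * Z

# Associational quantity: P(Y=1 | X=1), read off the observational data
p_cond = Y[X].mean()

# Interventional quantity: P(Y=1 | do(X=1)), simulated by forcing X = 1
Y_do = rng.random(n) < 0.2 + 0.3 * 1 + 0.4 * Z
p_do = Y_do.mean()

print(round(p_cond, 3), round(p_do, 3))   # p_cond ≈ 0.83 exceeds p_do ≈ 0.70
```

Because high-Z units are over-represented among the treated, conditioning overstates the effect that intervening would produce.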
Effects of treatment on the treated: Identification and generalization
In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 2009
Abstract

Cited by 21 (8 self)
Many applications of causal analysis call for assessing, retrospectively, the effect of withholding an action that has in fact been implemented. This counterfactual quantity, sometimes called the “effect of treatment on the treated” (ETT), has been used to evaluate educational programs, critique public policies, and justify individual decision making. In this paper we explore the conditions under which ETT can be estimated from (i.e., identified in) experimental and/or observational studies. We show that, when the action invokes a singleton variable, the conditions for ETT identification have simple characterizations in terms of causal diagrams. We further give a graphical characterization of the conditions under which the effects of multiple treatments on the treated can be identified, as well as ways in which the ETT estimand can be constructed from both interventional and observational distributions.
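The quantity being identified is ETT = E[Y_1 − Y_0 | X = 1]: the average effect among units that actually took the treatment. In a simulation both potential outcomes can be generated explicitly, so the true ETT is computable directly. This is a sketch of the definition only, under an assumed toy model of my own, not the paper's identification method.

```python
import numpy as np

# Illustrative simulation: generate both potential outcomes Y0, Y1 so that
# ETT = E[Y1 - Y0 | X = 1] can be read off directly (impossible in real data).
rng = np.random.default_rng(1)
n = 200_000
U = rng.random(n)                        # unobserved aptitude
X = (U + rng.normal(0.0, 0.2, n)) > 0.5  # treatment uptake depends on U
Y0 = U                                   # outcome without treatment
Y1 = U + 0.3                             # treatment adds a constant 0.3 here

# Effect of treatment on the treated
ett = (Y1 - Y0)[X].mean()

# Naive observed contrast E[Y|X=1] - E[Y|X=0] is biased by selection on U
Y = np.where(X, Y1, Y0)
naive = Y[X].mean() - Y[~X].mean()
print(round(ett, 3), round(naive, 3))    # ett = 0.3; naive is much larger
```

The naive contrast mixes the treatment effect with the fact that high-U units self-select into treatment, which is exactly why ETT requires an identification argument.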
Transportability of causal and statistical relations: A formal approach
In Proceedings of the Twenty-Fifth National Conference on Artificial Intelligence. AAAI Press, Menlo Park, CA, 2011
Abstract

Cited by 20 (11 self)
We address the problem of transferring information learned from experiments to a different environment, in which only passive observations can be collected. We introduce a formal representation called “selection diagrams” for expressing knowledge about differences and commonalities between environments and, using this representation, we derive procedures for deciding whether effects in the target environment can be inferred from experiments conducted elsewhere. When the answer is affirmative, the procedures identify the set of experiments and observations that need to be conducted to license the transport. We further discuss how transportability analysis can guide the transfer of knowledge in nonexperimental learning to minimize re-measurement cost and improve prediction power.
Dormant independence
In Proceedings of the Twenty-Third Conference on Artificial Intelligence, 2008
Abstract

Cited by 19 (12 self)
The construction of causal graphs from nonexperimental data rests on a set of constraints that the graph structure imposes on all probability distributions compatible with the graph. These constraints are of two types: conditional independencies and algebraic constraints, first noted by Verma. While conditional independencies are well studied and frequently used in causal induction algorithms, Verma constraints are still poorly understood, and rarely applied. In this paper we examine a special subset of Verma constraints which are easy to understand, easy to identify, and easy to apply; they arise from “dormant independencies,” namely, conditional independencies that hold in interventional distributions. We give a complete algorithm for determining whether a dormant independence between two sets of variables is entailed by the causal graph and is identifiable, that is, whether it resides in an interventional distribution that can be predicted without resorting to interventions. We further show the usefulness of dormant independencies in model testing and induction by giving an algorithm that uses constraints entailed by dormant independencies to prune extraneous edges from a given causal graph.
What counterfactuals can be tested
In Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence, 2007
Abstract

Cited by 16 (8 self)
Counterfactual statements, e.g., “my headache would be gone had I taken an aspirin,” are central to scientific discourse, and are formally interpreted as statements derived from “alternative worlds.” However, since they invoke hypothetical states of affairs, often incompatible with what is actually known or observed, testing counterfactuals is fraught with conceptual and practical difficulties. In this paper, we provide a complete characterization of “testable counterfactuals,” namely, counterfactual statements whose probabilities can be inferred from physical experiments. We provide complete procedures for discerning whether a given counterfactual is testable and, if so, expressing its probability in terms of experimental data.
Complete Identification Methods for the Causal Hierarchy
Abstract

Cited by 16 (6 self)
We consider a hierarchy of queries about causal relationships in graphical models, where each level in the hierarchy requires more detailed information than the one below. The hierarchy consists of three levels: associative relationships, derived from a joint distribution over the observable variables; cause-effect relationships, derived from distributions resulting from external interventions; and counterfactuals, derived from distributions that span multiple “parallel worlds” and result from simultaneous, possibly conflicting observations and interventions. We completely characterize cases where a given causal query can be computed from information lower in the hierarchy, and provide algorithms that accomplish this computation. Specifically, we show when effects of interventions can be computed from observational studies, and when probabilities of counterfactuals can be computed from experimental studies. We also provide a graphical characterization of those queries which cannot be computed (by any method) from queries at a lower layer of the hierarchy.
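In Pearl's standard notation, which this abstract presupposes, the three layers can be summarized compactly (a paraphrase sketch, not a quotation from the paper; here y_x denotes the value Y would take had X been x):

```latex
\begin{align*}
\text{Layer 1 (association):}\quad & P(y \mid x) \\
\text{Layer 2 (intervention):}\quad & P(y \mid \mathrm{do}(x), z) \\
\text{Layer 3 (counterfactual):}\quad & P(y_x \mid x', y')
\end{align*}
```

Each layer strictly subsumes the one above it: Layer 3 queries may condition on evidence (x', y') that contradicts the hypothesized intervention x, which no interventional distribution alone can express.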
Transportability of Causal Effects: Completeness Results, 2012
Abstract

Cited by 14 (6 self)
The study of transportability aims to identify conditions under which causal information learned from experiments can be reused in a different environment where only passive observations can be collected. The theory introduced in [Pearl and Bareinboim, 2011] (henceforth [PB, 2011]) defines formal conditions for such transfer but falls short of providing an effective procedure for deciding whether transportability is feasible for a given set of assumptions about differences between the source and target domains. This paper provides such a procedure. It establishes a necessary and sufficient condition for deciding when causal effects in the target domain are estimable from both the statistical information available and the causal information transferred from the experiments. The paper further provides a complete algorithm for computing the transport formula, that is, a way of fusing experimental and observational information to synthesize an estimate of the desired causal relation.
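In the simplest case, a transport formula reweights an experimentally learned conditional effect by the target domain's distribution of an effect modifier Z: P*(y | do(x)) = Σ_z P(y | do(x), z) P*(z). The numeric check below is my own illustration of that fusion step, not an example from the paper; all distributions are assumed toy values.

```python
# Illustrative transport-formula check (toy numbers, binary Z):
# Z modifies the effect of X on Y and is distributed differently in the
# target domain, so the raw source-domain effect does not transfer as-is.
p_y_do_x_z = {0: 0.3, 1: 0.8}   # P(Y=1 | do(X=1), Z=z), learned in the source experiment
p_z_source = {0: 0.7, 1: 0.3}   # P(Z=z) in the source domain
p_z_target = {0: 0.2, 1: 0.8}   # P*(Z=z) in the target domain (observational)

# Naive transfer: the source-domain effect, averaged over the SOURCE Z
direct_transfer = sum(p_y_do_x_z[z] * p_z_source[z] for z in (0, 1))

# Transport formula: same experimental ingredients, reweighted by the TARGET Z
transported = sum(p_y_do_x_z[z] * p_z_target[z] for z in (0, 1))

print(direct_transfer, transported)   # 0.45 in the source vs 0.70 transported
```

The paper's contribution is deciding, from a selection diagram, when such a fusion exists at all and constructing it algorithmically; this snippet only shows the arithmetic of the final formula.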
On the validity of covariate adjustment for estimating causal effects
In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, Eds. P. Grunwald & P. Spirtes, 2010
Abstract

Cited by 14 (2 self)
Identifying the effects of actions (treatments) on outcome variables from observational data and causal assumptions is a fundamental problem in causal inference. This identification is made difficult by the presence of confounders, which can be related to both treatment and outcome variables. Confounders are often handled, both in theory and in practice, by adjusting for covariates, in other words by considering outcomes conditioned on treatment and covariate values, weighted by the probability of observing those covariate values. In this paper, we give a complete graphical criterion for covariate adjustment, which we term the adjustment criterion, and derive some interesting corollaries of the completeness of this criterion.
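The adjustment step the abstract describes is the formula P(y | do(x)) = Σ_z P(y | x, z) P(z). The sketch below verifies it numerically in an assumed toy model of mine (a single covariate Z that satisfies the criterion); it illustrates the formula, not the paper's graphical criterion itself.

```python
import numpy as np

# Toy model where Z satisfies the adjustment criterion for X -> Y:
#   Z ~ Bernoulli(0.4);  P(X=1|Z) = 0.8 if Z else 0.3;
#   P(Y=1 | X, Z) = 0.1 + 0.4*X + 0.3*Z
rng = np.random.default_rng(2)
n = 400_000
Z = rng.random(n) < 0.4
X = rng.random(n) < np.where(Z, 0.8, 0.3)        # treatment depends on Z
Y = rng.random(n) < 0.1 + 0.4 * X + 0.3 * Z

# Adjustment formula estimated purely from observational data:
#   sum_z P(Y=1 | X=1, Z=z) * P(Z=z)
p_adj = sum(Y[X & (Z == z)].mean() * (Z == z).mean() for z in (True, False))

# Ground truth obtained by simulating the intervention do(X=1)
Y_do = rng.random(n) < 0.1 + 0.4 + 0.3 * Z
print(round(p_adj, 3), round(Y_do.mean(), 3))    # both ≈ 0.62
```

The completeness result in the paper says exactly when a covariate set licenses this computation; when the criterion fails, the same arithmetic would produce a biased answer.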