Results 1–10 of 38
Causal inference in statistics: An Overview
, 2009
Abstract

Cited by 23 (8 self)
This review presents empirical researchers with recent advances in causal inference, and stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underlie all causal inferences, the languages used in formulating those assumptions, the conditional nature of all causal and counterfactual claims, and the methods that have been developed for the assessment of such claims. These advances are illustrated using a general theory of causation based on the Structural Causal Model (SCM) described in Pearl (2000a), which subsumes and unifies other approaches to causation and provides a coherent mathematical foundation for the analysis of causes and counterfactuals. In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions (also called “causal effects” or “policy evaluation”), (2) queries about probabilities of counterfactuals (including assessment of “regret,” “attribution,” or “causes of effects”), and (3) queries about direct and indirect effects (also known as “mediation”). Finally, the paper defines the formal and conceptual relationships between the structural and potential-outcome frameworks and presents tools for a symbiotic analysis that uses the strong features of both.
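The interventional queries surveyed above can be made concrete with the back-door adjustment formula, P(y | do(x)) = Σ_z P(y | x, z) P(z). A minimal sketch, assuming a hypothetical binary model in which a single covariate Z satisfies the back-door criterion for X → Y; the joint distribution below is invented purely for illustration:

```python
# Toy back-door adjustment: P(y | do(x)) = sum_z P(y | x, z) * P(z).
# The joint P(Z, X, Y) is a made-up example, not data from the paper.

joint = {  # keys: (z, x, y), all binary; probabilities sum to 1
    (0, 0, 0): 0.20, (0, 0, 1): 0.10,
    (0, 1, 0): 0.05, (0, 1, 1): 0.15,
    (1, 0, 0): 0.05, (1, 0, 1): 0.05,
    (1, 1, 0): 0.10, (1, 1, 1): 0.30,
}

def p(pred):
    """Probability of the event defined by pred(z, x, y)."""
    return sum(v for k, v in joint.items() if pred(*k))

def p_y_do_x(y, x):
    """P(Y=y | do(X=x)) by adjusting for the back-door variable Z."""
    total = 0.0
    for z in (0, 1):
        p_z = p(lambda zz, xx, yy: zz == z)
        p_y_given_xz = (p(lambda zz, xx, yy: zz == z and xx == x and yy == y)
                        / p(lambda zz, xx, yy: zz == z and xx == x))
        total += p_y_given_xz * p_z
    return total
```

Note that this differs from naive conditioning: P(Y=1 | X=1) mixes the confounding influence of Z into the estimate, while the adjustment formula averages over Z's marginal distribution.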
Identifiability in causal Bayesian networks: A sound and complete algorithm
 In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI 2006), Menlo Park, CA
, 2006
Abstract

Cited by 15 (0 self)
This paper addresses the problem of identifying causal effects from nonexperimental data in a causal Bayesian network, i.e., a directed acyclic graph that represents causal relationships. The identifiability question asks whether it is possible to compute the probability of some set of (effect) variables given an intervention on another set of (intervention) variables, in the presence of nonobservable (i.e., hidden or latent) variables. It is well known that the answer depends on the structure of the causal Bayesian network, the set of observable variables, the set of effect variables, and the set of intervention variables. Our work builds on and extends the work of Tian, Pearl, Huang, and Valtorta (Tian & Pearl 2002a; 2002b; 2003; Huang & Valtorta 2006a). We show that the identify algorithm that Tian and Pearl define and prove sound for semi-Markovian models can be transferred to general causal graphs, where it is not only sound but also complete. This result effectively solves the identifiability question for causal Bayesian networks that Pearl posed in 1995 (Pearl 1995), by providing a sound and complete algorithm for identifiability.
Dormant independence
 In Proceedings of the Twenty-Third Conference on Artificial Intelligence
, 2008
Abstract

Cited by 14 (10 self)
The construction of causal graphs from nonexperimental data rests on a set of constraints that the graph structure imposes on all probability distributions compatible with the graph. These constraints are of two types: conditional independencies and algebraic constraints, first noted by Verma. While conditional independencies are well studied and frequently used in causal induction algorithms, Verma constraints are still poorly understood and rarely applied. In this paper we examine a special subset of Verma constraints which are easy to understand, easy to identify, and easy to apply; they arise from “dormant independencies,” namely, conditional independencies that hold in interventional distributions. We give a complete algorithm for determining whether a dormant independence between two sets of variables is entailed by the causal graph in an identifiable way, that is, whether it resides in an interventional distribution that can be predicted without resorting to interventions. We further show the usefulness of dormant independencies in model testing and induction by giving an algorithm that uses constraints entailed by dormant independencies to prune extraneous edges from a given causal graph.
Transportability of causal and statistical relations: A formal approach
 In Proceedings of the Twenty-Fifth National Conference on Artificial Intelligence. AAAI Press, Menlo Park, CA
, 2011
Abstract

Cited by 14 (8 self)
We address the problem of transferring information learned from experiments to a different environment, in which only passive observations can be collected. We introduce a formal representation called “selection diagrams” for expressing knowledge about differences and commonalities between environments and, using this representation, we derive procedures for deciding whether effects in the target environment can be inferred from experiments conducted elsewhere. When the answer is affirmative, the procedures identify the set of experiments and observations that need to be conducted to license the transport. We further discuss how transportability analysis can guide the transfer of knowledge in nonexperimental learning to minimize re-measurement cost and improve prediction power.
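The simplest instance of such a transport procedure is a re-weighting formula, P*(y | do(x)) = Σ_z P(y | do(x), z) P*(z), which combines experimental results from the source environment with the target environment's observed distribution of the variable Z marked by a selection node. A minimal numerical sketch; all probabilities are invented for illustration:

```python
# Toy transport formula: re-weight source-experiment results
# P(y | do(x), z) by the target population's P*(z).
# Both tables below are hypothetical numbers.

p_y1_do_x1_given_z = {0: 0.30, 1: 0.60}  # from experiments in the source
p_star_z = {0: 0.2, 1: 0.8}              # observed in the target

# P*(Y=1 | do(X=1)) in the target environment:
p_star_y1_do_x1 = sum(p_y1_do_x1_given_z[z] * p_star_z[z] for z in (0, 1))
```

If instead the source population had been used for the weights, the estimate would generally differ; the selection diagram is what licenses re-weighting by Z alone.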
Effects of treatment on the treated: Identification and generalization
 In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence
, 2009
Abstract

Cited by 13 (5 self)
Many applications of causal analysis call for assessing, retrospectively, the effect of withholding an action that has in fact been implemented. This counterfactual quantity, sometimes called the “effect of treatment on the treated” (ETT), has been used to evaluate educational programs, critique public policies, and justify individual decision making. In this paper we explore the conditions under which ETT can be estimated from (i.e., identified in) experimental and/or observational studies. We show that, when the action invokes a singleton variable, the conditions for ETT identification have simple characterizations in terms of causal diagrams. We further give a graphical characterization of the conditions under which the effects of multiple treatments on the treated can be identified, as well as ways in which the ETT estimand can be constructed from both interventional and observational distributions.
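One such construction, under the assumption that a back-door admissible set Z is available, expresses the counterfactual term P(Y_{x'} = y | X = x) as Σ_z P(y | x', z) P(z | x): the outcome the treated group would have experienced under the alternative action, averaged over the treated group's covariate distribution. A toy sketch with invented probabilities:

```python
# Toy ETT-style counterfactual under back-door admissibility:
# P(Y_{x=0} = 1 | X = 1) = sum_z P(Y=1 | X=0, Z=z) * P(Z=z | X=1).
# All numbers are hypothetical.

p_y1_given_x0_z = {0: 0.10, 1: 0.40}  # P(Y=1 | X=0, Z=z), observational
p_z_given_x1 = {0: 0.25, 1: 0.75}     # P(Z=z | X=1), observational

# Untreated-outcome probability for the treated subpopulation:
p_y1_x0_given_treated = sum(p_y1_given_x0_z[z] * p_z_given_x1[z]
                            for z in (0, 1))
```

Contrasting this quantity with the observed P(Y=1 | X=1) yields the ETT itself, the average effect of treatment within the treated group.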
Transportability across studies: A formal approach
, 2010
Abstract

Cited by 8 (5 self)
We provide a formal definition of the notion of “transportability,” or “external validity,” which we view as a license to transfer causal information learned in experimental studies to a different environment, in which only observational studies can be conducted. We introduce a formal representation called “selection diagrams” for expressing knowledge about differences and commonalities between populations of interest and, using this representation, we derive procedures for deciding whether causal effects in the target environment can be inferred from experimental findings in a different environment. When the answer is affirmative, the procedures identify the set of experimental and observational studies that need to be conducted to license the transport. We further demonstrate how transportability analysis can guide the transfer of knowledge among nonexperimental studies to minimize re-measurement cost and improve prediction power. We further provide a causally principled definition of “surrogate endpoint” and show that the theory of transportability can assist the identification of valid surrogates in a complex network of cause-effect relationships.
Causal reasoning with ancestral graphs
, 2008
Abstract

Cited by 7 (0 self)
Causal reasoning is primarily concerned with what would happen to a system under external interventions. In particular, we are often interested in predicting the probability distribution of some random variables that would result if some other variables were forced to take certain values. One prominent approach to tackling this problem is based on causal Bayesian networks, using directed acyclic graphs as causal diagrams to relate post-intervention probabilities to pre-intervention probabilities that are estimable from observational data. However, such causal diagrams are seldom fully testable given observational data. In consequence, many causal discovery algorithms based on data mining can only output an equivalence class of causal diagrams (rather than a single one). This paper is concerned with causal reasoning given an equivalence class of causal diagrams, represented by a (partial) ancestral graph. We present two main results. The first result extends Pearl’s (1995) celebrated do-calculus to the context of ancestral graphs. In the second result, we focus on a key component of Pearl’s calculus, the property of invariance under interventions, and give stronger graphical conditions for this property than those implied by the first result. The second result also improves on the earlier, similar results due to Spirtes et al. (1993).
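Graphical conditions of this kind ultimately reduce to separation tests in a graph. As background for the DAG case, here is a minimal d-separation test via the classic ancestral moral graph method (restrict to ancestors of the query sets, moralize, delete the conditioning set, check reachability); the example graphs are hypothetical:

```python
# Minimal d-separation test for a DAG given as {node: set_of_children}.
# Method: ancestral subgraph -> moralization -> remove conditioning
# set -> undirected reachability. Toy graphs only.

def ancestors(dag, nodes):
    """All nodes in `nodes` plus their ancestors in the DAG."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        n = stack.pop()
        for parent, children in dag.items():
            if n in children and parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def d_separated(dag, xs, ys, zs):
    """True iff xs and ys are d-separated given zs."""
    keep = ancestors(dag, set(xs) | set(ys) | set(zs))
    adj = {n: set() for n in keep}
    for parent, children in dag.items():
        for child in children:
            if parent in keep and child in keep:
                adj[parent].add(child)
                adj[child].add(parent)
    for child in keep:  # "marry" co-parents of each retained node
        parents = [p for p, cs in dag.items() if child in cs and p in keep]
        for i, a in enumerate(parents):
            for b in parents[i + 1:]:
                adj[a].add(b)
                adj[b].add(a)
    # Block the conditioning set, then search from xs toward ys.
    frontier, seen = list(set(xs) - set(zs)), set(zs)
    while frontier:
        n = frontier.pop()
        if n in ys:
            return False
        if n in seen:
            continue
        seen.add(n)
        frontier.extend(adj[n] - seen)
    return True
```

On a chain X → M → Y this reports dependence marginally and independence given M; on a collider X → C ← Y it reports the reverse, matching the usual d-separation semantics.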
Complete Identification Methods for the Causal Hierarchy
Abstract

Cited by 7 (2 self)
We consider a hierarchy of queries about causal relationships in graphical models, where each level in the hierarchy requires more detailed information than the one below. The hierarchy consists of three levels: associative relationships, derived from a joint distribution over the observable variables; cause-effect relationships, derived from distributions resulting from external interventions; and counterfactuals, derived from distributions that span multiple “parallel worlds” and result from simultaneous, possibly conflicting observations and interventions. We completely characterize the cases where a given causal query can be computed from information lower in the hierarchy, and provide algorithms that accomplish this computation. Specifically, we show when effects of interventions can be computed from observational studies, and when probabilities of counterfactuals can be computed from experimental studies. We also provide a graphical characterization of those queries which cannot be computed (by any method) from queries at a lower layer of the hierarchy.