Results 1–10 of 23
On specifying graphical models for causation, and the identification problem
 Evaluation Review
, 2004
"... This paper (which is mainly expository) sets up graphical models for causation, having a bit less than the usual complement of hypothetical counterfactuals. Assuming the invariance of error distributions may be essential for causal inference, but the errors themselves need not be invariant. Graphs c ..."
Abstract

Cited by 29 (2 self)
 Add to MetaCart
This paper (which is mainly expository) sets up graphical models for causation, having a bit less than the usual complement of hypothetical counterfactuals. Assuming the invariance of error distributions may be essential for causal inference, but the errors themselves need not be invariant. Graphs can be interpreted using conditional distributions, so that we can better address connections between the mathematical framework and causality in the world. The identification problem is posed in terms of conditionals. As will be seen, causal relationships cannot be inferred from a data set by running regressions unless there is substantial prior knowledge about the mechanisms that generated the data. There are few successful applications of graphical models, mainly because few causal pathways can be excluded on a priori grounds. The invariance conditions themselves remain to be assessed.
Instrumental variables and inverse probability weighting for causal inference from longitudinal observational studies
, 2004
"... ..."
Statistical Models for Causation: What Inferential Leverage Do They Provide?
 Evaluation Review
, 2006
"... The online version of this article can be found at: ..."
Abstract

Cited by 20 (4 self)
 Add to MetaCart
The online version of this article can be found at:
Semiparametric estimation of treatment effect in a pretest–posttest study with missing data (with Discussion)
 Statistical Science
"... Abstract. The pretest–posttest study is commonplace in numerous applications. Typically, subjects are randomized to two treatments, and response is measured at baseline, prior to intervention with the randomized treatment (pretest), and at prespecified followup time (posttest). Interest focuses on ..."
Abstract

Cited by 17 (2 self)
 Add to MetaCart
The pretest–posttest study is commonplace in numerous applications. Typically, subjects are randomized to two treatments, and response is measured at baseline, prior to intervention with the randomized treatment (pretest), and at a prespecified follow-up time (posttest). Interest focuses on the effect of treatments on the change between mean baseline and follow-up response. Missing posttest response for some subjects is routine, and disregarding missing cases can lead to invalid inference. Despite the popularity of this design, a consensus on an appropriate analysis when no data are missing, let alone for taking into account missing follow-up, does not exist. Under a semiparametric perspective on the pretest–posttest model, in which limited distributional assumptions on pretest or posttest response are made, we show how the theory of Robins, Rotnitzky and Zhao may be used to characterize a class of consistent treatment effect estimators and to identify the efficient estimator in the class. We then describe how the theoretical results translate into practice. The development not only shows how a unified framework for inference in this setting emerges from the Robins, Rotnitzky and Zhao theory, but also provides a review and demonstration of the key aspects of this theory in a familiar context. The results are also relevant to the problem of comparing two treatment means with adjustment for baseline covariates. Key words and phrases: Analysis of covariance, covariate adjustment, influence function, inverse probability weighting, missing at random.
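The core idea behind the abstract above can be illustrated with a minimal simulation sketch. This is not the paper's semiparametric estimator; it only shows, under an invented data-generating process with known observation probabilities, how inverse probability weighting corrects the bias that a complete-case analysis of the pretest–posttest change score incurs when posttest missingness depends on baseline response. All variable names and parameter values here are hypothetical.

```python
import math
import random

def expit(z):
    # Logistic function, used as a hypothetical observation-probability model
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(0)
true_effect = 2.0
n = 40000

naive_num = {0: 0.0, 1: 0.0}; naive_den = {0: 0, 1: 0}
ipw_num = {0: 0.0, 1: 0.0}; ipw_den = {0: 0.0, 1: 0.0}

for _ in range(n):
    y0 = rng.gauss(0, 1)                                # pretest (baseline) response
    t = rng.randint(0, 1)                               # randomized treatment arm
    y1 = 0.5 * y0 + true_effect * t + rng.gauss(0, 1)   # posttest response
    d = y1 - y0                                         # change score
    # Posttest missing at random: observation depends on baseline and arm,
    # so complete cases are a selected sample within each arm
    p_obs = expit(0.3 + (1 - 2 * t) * y0)
    if rng.random() < p_obs:                            # posttest observed
        w = 1.0 / p_obs                                 # inverse probability weight
        naive_num[t] += d; naive_den[t] += 1
        ipw_num[t] += w * d; ipw_den[t] += w

naive = naive_num[1] / naive_den[1] - naive_num[0] / naive_den[0]
ipw = ipw_num[1] / ipw_den[1] - ipw_num[0] / ipw_den[0]
print(f"complete-case: {naive:.3f}  IPW: {ipw:.3f}  truth: {true_effect}")
```

In practice the observation probabilities are unknown and must be estimated (e.g. by logistic regression), which is where the Robins, Rotnitzky and Zhao theory and the efficiency results of the paper come in; the sketch sidesteps that by using the true probabilities.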
Ignorable Common Information, Null Sets and Basu’s First Theorem
"... This paper deals with the Intersection Property, or Basu’s First Theorem, which is valid under a condition of no common information, also known as measurable separability. After formalizing this notion, the paper reviews general properties and give operational characterizations in two topical cases: ..."
Abstract

Cited by 10 (0 self)
 Add to MetaCart
This paper deals with the Intersection Property, or Basu’s First Theorem, which is valid under a condition of no common information, also known as measurable separability. After formalizing this notion, the paper reviews general properties and gives operational characterizations in two topical cases: the finite one and the multivariate normal one. The paper concludes by discussing the relevance of these characterizations for fields such as graphical models, zero entries in contingency tables, causal analysis and estimability in Markov processes.
Confounding Equivalence in Causal Inference
 Proceedings of UAI, 433–441. AUAI, Corvallis, OR, 2010.
, 2010
"... The paper provides a simple test for deciding, from a given causal diagram, whether two sets of variables have the same biasreducing potential under adjustment. The test requires that one of the following two conditions holds: either (1) both sets are admissible (i.e., satisfy the backdoor criteri ..."
Abstract

Cited by 3 (3 self)
 Add to MetaCart
The paper provides a simple test for deciding, from a given causal diagram, whether two sets of variables have the same bias-reducing potential under adjustment. The test requires that one of the following two conditions holds: either (1) both sets are admissible (i.e., satisfy the back-door criterion) or (2) the Markov boundaries surrounding the manipulated variable(s) are identical in both sets. Applications to covariate selection and model testing are discussed.
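One ingredient of condition (2) above, the Markov boundary of a node in a DAG, is easy to compute: it consists of the node's parents, its children, and its children's other parents. The sketch below computes it on a small hypothetical diagram; it is only this one ingredient, not the paper's full equivalence test, which also requires back-door admissibility checks via d-separation. The graph and variable names are invented for illustration.

```python
def markov_boundary(node, parents):
    """Markov boundary of `node` in a DAG given as a parents map:
    its parents, its children, and its children's other parents."""
    children = {v for v, ps in parents.items() if node in ps}
    spouses = set()
    for c in children:
        spouses |= parents[c]
    return (parents[node] | children | spouses) - {node}

# Hypothetical diagram: Z1 -> X -> Y <- Z2
dag = {"Z1": set(), "Z2": set(), "X": {"Z1"}, "Y": {"X", "Z2"}}
mb = markov_boundary("X", dag)
print(mb)  # parent Z1, child Y, and Y's other parent Z2
```

Under the paper's test, two candidate adjustment sets containing identical Markov boundaries of the manipulated variable would have the same bias-reducing potential even if neither satisfies the back-door criterion.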
Confounding Equivalence in Observational Studies (or, when are two measurements equally valuable for effect estimation?)
, 2009
"... ..."
Statistical Models for Causation
, 2005
"... We review the basis for inferring causation by statistical modeling. Parameters should be stable under interventions, and so should error distributions. There are also statistical conditions on the errors. Stability is difficult to establish a priori, and the statistical conditions are equally probl ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
We review the basis for inferring causation by statistical modeling. Parameters should be stable under interventions, and so should error distributions. There are also statistical conditions on the errors. Stability is difficult to establish a priori, and the statistical conditions are equally problematic. Therefore, causal relationships are seldom to be inferred from a data set by running statistical algorithms, unless there is substantial prior knowledge about the mechanisms that generated the data. We begin with linear models (regression analysis) and then turn to graphical models, which may in principle be nonlinear.
Limits of Econometrics
 International Econometric Review (IER), 5
"... It is an article of faith in much applied work that disturbance terms are IID—Independent and Identically Distributed—across observations. Sometimes, this assumption is replaced by other assumptions that are more complicated but equally artificial. For example, when observations ..."
Abstract
 Add to MetaCart
It is an article of faith in much applied work that disturbance terms are IID—Independent and Identically Distributed—across observations. Sometimes, this assumption is replaced by other assumptions that are more complicated but equally artificial. For example, when observations ...
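The practical consequence of violating the IID assumption can be shown with a small Monte Carlo sketch under an invented data-generating process: when both the regressor and the disturbances are autocorrelated, the usual OLS standard-error formula (which assumes IID errors) understates the true sampling variability of the slope, even though the slope estimate itself remains consistent. All parameter values below are hypothetical.

```python
import math
import random

rng = random.Random(1)
n, reps, rho = 200, 300, 0.8
slopes, naive_ses = [], []

for _ in range(reps):
    x, e = 0.0, 0.0
    xs, ys = [], []
    for _ in range(n):
        x = rho * x + rng.gauss(0, 1)   # autocorrelated regressor
        e = rho * e + rng.gauss(0, 1)   # autocorrelated (non-IID) disturbance
        xs.append(x)
        ys.append(1.0 + 2.0 * x + e)    # true slope is 2.0
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((v - mx) ** 2 for v in xs)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    b1 = sxy / sxx                      # OLS slope estimate
    b0 = my - b1 * mx
    ssr = sum((b - b0 - b1 * a) ** 2 for a, b in zip(xs, ys))
    naive_ses.append(math.sqrt(ssr / (n - 2) / sxx))  # SE formula assuming IID
    slopes.append(b1)

mean_slope = sum(slopes) / reps
emp_sd = math.sqrt(sum((s - mean_slope) ** 2 for s in slopes) / (reps - 1))
mean_naive_se = sum(naive_ses) / reps
print(f"empirical sd of slope: {emp_sd:.4f}  IID-formula SE: {mean_naive_se:.4f}")
```

Here the empirical spread of the slope across replications is markedly larger than what the IID-based formula reports, which is exactly the kind of gap the article warns about when the IID assumption is taken on faith.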