Results 1–10 of 47
The Propensity Score with Continuous Treatments
 Applied Bayesian Modeling and Causal Inference from Incomplete-Data Perspectives
, 2004
Identification, inference, and sensitivity analysis for causal mediation effects
, 2008
Cited by 64 (4 self)
Abstract. Causal mediation analysis is routinely conducted by applied researchers in a variety of disciplines. The goal of such an analysis is to investigate alternative causal mechanisms by examining the roles of intermediate variables that lie in the causal paths between the treatment and outcome variables. In this paper we first prove that under a particular version of the sequential ignorability assumption, the average causal mediation effect (ACME) is nonparametrically identified. We compare our identification assumption with those proposed in the literature. Some practical implications of our identification result are also discussed. In particular, the popular estimator based on the linear structural equation model (LSEM) can be interpreted as an ACME estimator once additional parametric assumptions are made. We show that these assumptions can easily be relaxed within and outside of the LSEM framework and propose simple nonparametric estimation strategies. Second, and perhaps most importantly, we propose a new sensitivity analysis that can be easily implemented by applied researchers within the LSEM framework. Like the existing identifying assumptions, the proposed sequential ignorability assumption may be too strong in many applied settings. Thus, sensitivity analysis is essential in order to examine the robustness of empirical findings to the possible existence of an unmeasured confounder. Finally, we apply the proposed methods to a randomized experiment from political psychology. We also make easy-to-use software available to implement the proposed methods. Key words and phrases: Causal inference, causal mediation analysis, direct and indirect effects, linear structural equation models, sequential ignorability, unmeasured confounders.
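Under its parametric assumptions, the LSEM-based ACME estimator this abstract discusses reduces to the product of two regression coefficients. A minimal sketch on simulated data (variable names and numbers are illustrative; the authors' accompanying software implements the full method, including the sensitivity analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data (illustrative): binary treatment T, mediator M, outcome Y.
T = rng.binomial(1, 0.5, n).astype(float)
M = 0.5 * T + rng.normal(size=n)            # mediator equation, alpha_1 = 0.5
Y = 0.3 * T + 0.7 * M + rng.normal(size=n)  # outcome equation, beta_2 = 0.7

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

alpha = ols(T, M)                       # [intercept, alpha_1]
beta = ols(np.column_stack([T, M]), Y)  # [intercept, beta_1, beta_2]

# Under the LSEM with sequential ignorability, ACME = alpha_1 * beta_2,
# here close to 0.5 * 0.7 = 0.35.
acme = alpha[1] * beta[2]
```

The point of the paper is that this product-of-coefficients reading is only valid under sequential ignorability plus the linear-model assumptions, which motivates the sensitivity analysis.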
Estimation of Causal Effects Using Propensity Score Weighting: An Application to Data on Right Heart Catheterization
 Health Services and Outcomes Research Methodology
, 2001
Cited by 62 (3 self)
Abstract. We consider methods for estimating causal effects of treatments when treatment assignment is unconfounded with outcomes conditional on a possibly large set of covariates. Robins and Rotnitzky (1995) suggested combining regression adjustment with weighting based on the propensity score (Rosenbaum and Rubin, 1983). We adopt this approach, allowing for a flexible specification of both the propensity score and the regression function. We apply these methods to data on the effects of right heart catheterization (RHC) studied in Connors et al. (1996), and we find that our estimator gives stable estimates over a wide range of values for the two parameters governing the selection of variables.
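The weighting component of this approach can be sketched with plain inverse-propensity weighting on simulated data (the paper itself combines weighting with regression adjustment; the data-generating process and logistic propensity model below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Simulated observational data with one confounder X (illustrative).
X = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-X))        # true propensity score
T = rng.binomial(1, p_true).astype(float)
Y = 2.0 * T + X + rng.normal(size=n)     # true average treatment effect = 2

# Fit a logistic propensity model e(X) by Newton-Raphson iterations.
Z = np.column_stack([np.ones(n), X])
w = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-Z @ w))
    grad = Z.T @ (T - p)                           # score of the log-likelihood
    hess = Z.T @ (Z * (p * (1 - p))[:, None])      # observed information
    w += np.linalg.solve(hess, grad)
e = 1.0 / (1.0 + np.exp(-Z @ w))

# Inverse-propensity-weighted estimate of the average treatment effect.
ate = np.mean(T * Y / e - (1 - T) * Y / (1 - e))
```

The naive difference in means would be confounded by X here; reweighting by the estimated propensity score removes that bias.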
Optimal Structural Nested Models for Optimal Sequential Decisions
 In Proceedings of the Second Seattle Symposium on Biostatistics
, 2004
Cited by 42 (4 self)
ABSTRACT: I describe two new methods for estimating the optimal treatment regime (equivalently, protocol, plan or strategy) from very high dimensional observational and experimental data: (i) g-estimation of an optimal double-regime structural nested mean model (dr-SNMM) and (ii) g-estimation of a standard single-regime SNMM combined with sequential dynamic-programming (DP) regression. These methods are compared to certain regression methods found in the sequential decision and reinforcement learning literatures and to the regret modelling methods of Murphy (2003). I consider both Bayesian and frequentist inference. In particular, I propose a novel “Bayes-frequentist compromise” that combines honest subjective non- or semiparametric Bayesian inference with good frequentist behavior, even in cases where the model is so large and the likelihood function so complex that standard (uncompromised) Bayes procedures have poor frequentist performance.
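The sequential dynamic-programming regression mentioned here can be sketched in its simplest Q-learning-style form on a toy two-stage problem (the simulated data and linear working models are illustrative assumptions, not the dr-SNMM g-estimation itself):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000

# Two-stage simulated data (illustrative): in each stage the optimal rule
# is to treat (A = 1) exactly when the current state is positive.
S1 = rng.normal(size=n)
A1 = rng.binomial(1, 0.5, n).astype(float)
S2 = 0.5 * S1 + rng.normal(size=n)
A2 = rng.binomial(1, 0.5, n).astype(float)
Y = S1 * A1 + S2 * A2 + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 2 (backward induction): model Q2(S2, A2) linearly and pick the
# action that maximizes the fitted value.
b2 = ols(np.column_stack([S2, A2, S2 * A2]), Y)
a2_opt = (b2[2] + b2[3] * S2 > 0).astype(float)

# Pseudo-outcome: Y with the observed stage-2 contribution replaced by
# the optimal one; then regress backwards for the stage-1 rule.
Y1 = Y + (b2[2] + b2[3] * S2) * (a2_opt - A2)
b1 = ols(np.column_stack([S1, A1, S1 * A1]), Y1)
a1_opt = (b1[2] + b1[3] * S1 > 0).astype(float)
```

This is the DP-regression baseline the abstract compares against; the dr-SNMM machinery is designed to keep such backward regressions robust when the working outcome models are misspecified.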
When can history be our guide? The pitfalls of counterfactual inference
 International Studies Quarterly
, 2007
Cited by 28 (6 self)
Inferences about counterfactuals are essential for prediction, answering “what if” questions, and estimating causal effects. However, when the counterfactuals posed are too far from the data at hand, conclusions drawn from well-specified statistical analyses become based on speculation and convenient but indefensible model assumptions rather than empirical evidence. Unfortunately, standard statistical approaches assume the veracity of the model rather than revealing the degree of model dependence, so this problem can be hard to detect. We develop easy-to-apply methods to evaluate counterfactuals that do not require sensitivity testing over specified classes of models. If an analysis fails the tests we offer, then we know that substantive results are sensitive to at least some modeling choices that are not based on empirical evidence. We use these methods to evaluate the extensive scholarly literatures on the effects of changes in the degree of democracy in a country (on any dependent variable) and separate analyses of the effects of UN peacebuilding efforts. We find evidence that many scholars are inadvertently drawing conclusions based more on modeling hypotheses than on evidence in the data. For some research questions, history contains insufficient information to be our guide. Free software that accompanies this paper implements all our suggestions. Social science is about making inferences: using facts we know to learn about facts we do not know. Some inferential targets (the facts we do not know) are factual, which means that they exist even if we do not know them. In early 2003, Saddam Hussein was obviously either alive or dead, but the world did not know which it was.
The dangers of extreme counterfactuals
 Political Analysis
, 2006
Cited by 26 (7 self)
We address the problem that occurs when inferences about counterfactuals—predictions, “what-if” questions, and causal effects—are attempted far from the available data. The danger of these extreme counterfactuals is that substantive conclusions drawn from statistical models that fit the data well turn out to be based largely on speculation hidden in convenient modeling assumptions that few would be willing to defend. Yet existing statistical strategies provide few reliable means of identifying extreme counterfactuals. We offer a proof that inferences farther from the data allow more model dependence and then develop easy-to-apply methods to evaluate how model dependent our answers would be to specified counterfactuals. These methods require neither sensitivity testing over specified classes of models nor evaluating any specific modeling assumptions. If an analysis fails the simple tests we offer, then we know that substantive results are sensitive to at least some modeling choices that are not based on empirical evidence. Free software that accompanies this article implements all the methods developed.
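A central check in this line of work asks whether a counterfactual point lies inside the convex hull of the observed covariates: a point outside the hull requires extrapolation and hence allows more model dependence. A sketch of that membership test, posed as a linear-programming feasibility problem (a generic reconstruction using SciPy, not the authors' accompanying software):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, x):
    """True if x is a convex combination of the rows of `points`.

    Feasibility LP: find lambda >= 0 with points.T @ lambda = x
    and sum(lambda) = 1; any feasible point certifies membership.
    """
    n, d = points.shape
    A_eq = np.vstack([points.T, np.ones(n)])
    b_eq = np.append(x, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0  # 0 = solved, 2 = infeasible (outside the hull)

# Covariate data: the four corners of the unit square (illustrative).
data = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

inside = in_convex_hull(data, np.array([0.5, 0.5]))   # interpolation
outside = in_convex_hull(data, np.array([2.0, 2.0]))  # extrapolation
```

A counterfactual at (0.5, 0.5) is supported by the data; one at (2, 2) is an extreme counterfactual in the paper's sense, and any answer about it leans on the model rather than the evidence.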
Choice as an alternative to control in observational studies
 Statistical Science
, 1999
Cited by 9 (0 self)
Abstract. In a randomized experiment, the investigator creates a clear and relatively unambiguous comparison of treatment groups by exerting tight control over the assignment of treatments to experimental subjects, ensuring that comparable subjects receive alternative treatments. In an observational study, the investigator lacks control of treatment assignments and must seek a clear comparison in other ways. Care in the choice of circumstances in which the study is conducted can greatly influence the quality of the evidence about treatment effects. This is illustrated in detail using three observational studies that use choice effectively, one each from economics, clinical psychology and epidemiology. Other studies are discussed more briefly to illustrate specific points. The design choices include (i) the choice of research hypothesis, (ii) the choice of treated and control groups, (iii) the explicit use of competing theories, rather than merely null and alternative hypotheses, (iv) the use of internal replication in the form of multiple manipulations of a single dose of treatment, (v) the use of undelivered doses in control groups, (vi) design choices to minimize the need for stability analyses, (vii) the duration of treatment and (viii) the use of natural blocks. Key words and phrases: Causal effects, control groups, internal replication, observational studies, sensitivity analysis, stability analysis, treatment effects, undelivered doses.
Marginal and nested structural models using instrumental variables
 Journal of the American Statistical Association
, 2010
Cited by 9 (2 self)
Abstract. The objective of many scientific studies is to evaluate the effect of a treatment on an outcome of interest ceteris paribus. Instrumental variables (IVs) serve as an experimental handle, independent of potential outcomes and potential treatment status and affecting potential outcomes only through potential treatment status. We propose marginal and nested structural models using IVs, in the spirit of marginal and nested structural models under no unmeasured confounding. A marginal structural IV model parameterizes the expectations of two potential outcomes under an active treatment and the null treatment respectively, for those in a covariate-specific subpopulation who would take the active treatment if the instrument were externally set to each specific level. A nested structural IV model parameterizes the difference between the two expectations after transformation by a link function, and hence the average treatment effect on the treated at each instrument level. We develop IV outcome regression, IV propensity score weighting, and doubly robust methods for estimation, in parallel to those for structural models under no unmeasured confounding. The regression method requires correctly specified models for the treatment propensity score and the outcome regression function. The weighting method requires a correctly specified model for the instrument propensity score. The doubly robust estimators depend on the two sets of models and remain consistent if either set of models is correctly specified. We apply our methods to study returns to education using data from the National Longitudinal Survey of Young Men. Key words and phrases. Causal inference; Double robustness; Generalized method
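For intuition on the IV idea underlying these models, the classical Wald estimator with a binary instrument recovers the treatment effect even when an unmeasured confounder biases the naive comparison (simulated data; this is the textbook IV estimator, not the paper's marginal or nested structural estimators):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

# Simulated data with an unmeasured confounder U (illustrative).
U = rng.normal(size=n)
Z = rng.binomial(1, 0.5, n).astype(float)            # randomized instrument
T = ((Z + U + rng.normal(size=n)) > 0.5).astype(float)  # confounded treatment
Y = 1.5 * T + U + rng.normal(size=n)                 # true effect = 1.5

# Naive treated-vs-control comparison is biased upward by U.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Wald/IV estimator: ratio of the instrument's effect on the outcome
# to its effect on treatment uptake.
wald = ((Y[Z == 1].mean() - Y[Z == 0].mean())
        / (T[Z == 1].mean() - T[Z == 0].mean()))
```

Because Z is independent of U and moves Y only through T, the ratio isolates the treatment effect; the structural IV models in the paper generalize this logic to covariate-specific and instrument-level-specific effects.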
Adjusting for Time-Varying Confounding in Survival Analysis: A Technical Report
 Population Studies Center Research Report 04
, 2004