Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference
Political Analysis, 2007
Cited by 248 (42 self)
Abstract:
Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author's favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological ...
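The specification sensitivity the abstract describes can be made concrete with a toy comparison: the same fabricated data yield noticeably different treatment-effect estimates under two defensible specifications, a raw difference in means versus a confounder-stratified difference. All numbers below are invented for illustration and are not from the paper.

```python
# Toy illustration: two reasonable specifications, two different estimates.
# Each row is (treated, confounder_stratum, outcome); data are made up.
data = [
    (1, "hi", 10), (1, "hi", 12), (1, "lo", 4),
    (0, "hi", 9),  (0, "lo", 2),  (0, "lo", 3),
]

def mean(xs):
    return sum(xs) / len(xs)

# Specification 1: raw difference in treated vs. control means.
raw = (mean([y for t, _, y in data if t == 1])
       - mean([y for t, _, y in data if t == 0]))

# Specification 2: difference in means within each confounder stratum,
# then averaged across strata (a crude adjustment for the confounder).
per_stratum = []
for s in {s for _, s, _ in data}:
    yt = [y for t, ss, y in data if t == 1 and ss == s]
    yc = [y for t, ss, y in data if t == 0 and ss == s]
    per_stratum.append(mean(yt) - mean(yc))
adjusted = mean(per_stratum)
```

On these data the raw estimate is 4.0 while the stratified estimate is 1.75, so a reader shown only one of the two would get a very different impression of the effect.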
Causal Diagrams For Empirical Research
Cited by 236 (37 self)
Abstract:
The primary aim of this paper is to show how graphical models can be used as a mathematical language for integrating statistical and subject-matter information. In particular, the paper develops a principled, nonparametric framework for causal inference, in which diagrams are queried to determine if the assumptions available are sufficient for identifying causal effects from nonexperimental data. If so, the diagrams can be queried to produce mathematical expressions for causal effects in terms of observed distributions; otherwise, the diagrams can be queried to suggest additional observations or auxiliary experiments from which the desired inferences can be obtained.
Key words: causal inference, graph models, interventions, treatment effect.
Matching Methods for Causal Inference: A Review and a Look Forward
Cited by 72 (1 self)
Abstract:
When estimating causal effects using observational data, it is desirable to replicate a randomized experiment as closely as possible by obtaining treated and control groups with similar covariate distributions. This goal can often be achieved by choosing well-matched samples of the original treated and control groups, thereby reducing bias due to the covariates. Since the 1970s, work on matching methods has examined how best to choose treated and control subjects for comparison. Matching methods are gaining popularity in fields such as economics, epidemiology, medicine, and political science. However, until now the literature and related advice have been scattered across disciplines. Researchers who are interested in using matching methods—or developing methods related to matching—do not have a single place to turn to learn about past and current research. This paper provides a structure for thinking about matching methods and guidance on their use, coalescing the existing research (both old and new) and providing a summary of where the literature on matching methods is now and where it should be headed.
Key words and phrases: observational study, propensity scores, subclassification, weighting.
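The core matching idea in this abstract, pairing each treated unit with a comparable control before comparing outcomes, can be sketched in its simplest form: 1:1 nearest-neighbor matching with replacement on a single covariate. This is a minimal illustration under made-up data, not the full machinery (propensity scores, calipers, subclassification) the review covers.

```python
# Minimal sketch of 1:1 nearest-neighbor matching with replacement on one
# covariate. Data and the function name are illustrative only.

def match_and_estimate(treated, control):
    """treated/control: lists of (covariate, outcome) pairs.
    Matches each treated unit to the nearest control on the covariate and
    returns the average matched outcome difference, an estimate of the
    average treatment effect on the treated (ATT)."""
    diffs = []
    for x_t, y_t in treated:
        x_c, y_c = min(control, key=lambda c: abs(c[0] - x_t))  # nearest control
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

# Toy data: treated outcome = covariate + 2, control outcome = covariate,
# so the true effect is 2 at every covariate value.
treated = [(1.0, 3.0), (2.0, 4.0), (3.0, 5.0)]
control = [(0.9, 0.9), (2.1, 2.1), (3.2, 3.2)]
att = match_and_estimate(treated, control)
```

Because the matched controls sit close to, but not exactly at, the treated covariate values, the estimate lands near the true effect of 2 rather than on it, which is why the literature pairs matching with bias corrections and balance checks.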
Full matching in an observational study of coaching for the SAT
Journal of the American Statistical Association, 2004
Cited by 55 (7 self)
Abstract:
Among matching techniques for observational studies, full matching is in principle the best, in the sense that its alignment of comparable treated and control subjects is as good as that of any alternate method, and potentially much better. This article evaluates the practical performance of full matching for the first time, modifying it in order to minimize variance as well as bias and then using it to compare coached and uncoached takers of the SAT. In this new version, with restrictions on the ratio of treated subjects to controls within matched sets, full matching makes use of many more observations than does pair matching, but achieves far closer matches than does matching with k ≥ 2 controls. Prior to matching, the coached and uncoached groups are separated on the propensity score by 1.1 SDs. Full matching reduces this separation to 1% or 2% of an SD. In older literature comparing matching and regression, Cochran expressed doubts that any method of adjustment could substantially reduce observed bias of this magnitude. To accommodate missing data, regression-based analyses by ETS researchers rejected a subset of the available sample that differed significantly from the subsample they analyzed. Full matching on the propensity score handles the same problem simply and without rejecting observations. In addition, it eases the detection and handling of non-constancy of treatment effects, which the regression-based analyses had obscured, and it makes fuller use of covariate information. It estimates a somewhat larger effect of coaching on the math score than did ETS's methods.
Yes, But What’s the Mechanism? (Don’t Expect an Easy Answer)
Cited by 35 (0 self)
Abstract:
Psychologists increasingly recommend experimental analysis of mediation. This is a step in the right direction because mediation analyses based on nonexperimental data are likely to be biased and because experiments, in principle, provide a sound basis for causal inference. But even experiments cannot overcome certain threats to inference that arise chiefly or exclusively in the context of mediation analysis—threats that have received little attention in psychology. The authors describe 3 of these threats and suggest ways to improve the exposition and design of mediation tests. Their conclusion is that inference about mediators is far more difficult than previous research suggests and is best tackled by an experimental research program that is specifically designed to address the challenges of mediation analysis.
When can history be our guide? The pitfalls of counterfactual inference
International Studies Quarterly, 2007
Cited by 29 (6 self)
Abstract:
Inferences about counterfactuals are essential for prediction, answering "what if" questions, and estimating causal effects. However, when the counterfactuals posed are too far from the data at hand, conclusions drawn from well-specified statistical analyses become based on speculation and convenient but indefensible model assumptions rather than empirical evidence. Unfortunately, standard statistical approaches assume the veracity of the model rather than revealing the degree of model dependence, so this problem can be hard to detect. We develop easy-to-apply methods to evaluate counterfactuals that do not require sensitivity testing over specified classes of models. If an analysis fails the tests we offer, then we know that substantive results are sensitive to at least some modeling choices that are not based on empirical evidence. We use these methods to evaluate the extensive scholarly literatures on the effects of changes in the degree of democracy in a country (on any dependent variable) and separate analyses of the effects of UN peacebuilding efforts. We find evidence that many scholars are inadvertently drawing conclusions based more on modeling hypotheses than on evidence in the data. For some research questions, history contains insufficient information to be our guide. Free software that accompanies this paper implements all our suggestions.
The dangers of extreme counterfactuals
Political Analysis, 2006
Cited by 28 (7 self)
Abstract:
We address the problem that occurs when inferences about counterfactuals—predictions, "what-if" questions, and causal effects—are attempted far from the available data. The danger of these extreme counterfactuals is that substantive conclusions drawn from statistical models that fit the data well turn out to be based largely on speculation hidden in convenient modeling assumptions that few would be willing to defend. Yet existing statistical strategies provide few reliable means of identifying extreme counterfactuals. We offer a proof that inferences farther from the data allow more model dependence and then develop easy-to-apply methods to evaluate how model dependent our answers would be to specified counterfactuals. These methods require neither sensitivity testing over specified classes of models nor evaluating any specific modeling assumptions. If an analysis fails the simple tests we offer, then we know that substantive results are sensitive to at least some modeling choices that are not based on empirical evidence. Free software that accompanies this article implements all the methods developed.
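The diagnostic idea in this abstract, flagging counterfactuals that lie outside the region spanned by the observed covariates, can be sketched with a deliberately crude check. The paper's method uses the convex hull of the data; the bounding-box test below is only a cheap necessary condition (the hull always lies inside the box), and the data and function name are illustrative, not taken from the paper's software.

```python
# Crude extrapolation check: is a counterfactual point outside the
# per-dimension range (bounding box) of the observed data? If so, it is
# necessarily outside the convex hull and requires extrapolation.

def outside_observed_range(point, data):
    """True if any coordinate of `point` falls outside the min/max of the
    corresponding coordinate in `data` (a list of equal-length tuples)."""
    for j, v in enumerate(point):
        col = [row[j] for row in data]
        if v < min(col) or v > max(col):
            return True
    return False

# Illustrative observed rows of (democracy score, GDP-like covariate).
data = [(0.2, 10), (0.5, 40), (0.9, 25)]
ok_cf = outside_observed_range((0.4, 30), data)   # within observed ranges
bad_cf = outside_observed_range((1.5, 30), data)  # democracy beyond any observed value
```

A point that passes this box test can still fall outside the convex hull, which is why the paper's actual test is stronger; failing even the box test is a clear warning that the counterfactual rests on extrapolation.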
Conditional Independence in Sample Selection Models
Economics Letters, 1997
"... working paper ..."