Results 1–10 of 32
On specifying graphical models for causation, and the identification problem
 Evaluation Review
, 2004
Abstract

Cited by 16 (1 self)
This paper (which is mainly expository) sets up graphical models for causation, having a bit less than the usual complement of hypothetical counterfactuals. Assuming the invariance of error distributions may be essential for causal inference, but the errors themselves need not be invariant. Graphs can be interpreted using conditional distributions, so that we can better address connections between the mathematical framework and causality in the world. The identification problem is posed in terms of conditionals. As will be seen, causal relationships cannot be inferred from a data set by running regressions unless there is substantial prior knowledge about the mechanisms that generated the data. There are few successful applications of graphical models, mainly because few causal pathways can be excluded on a priori grounds. The invariance conditions themselves remain to be assessed.
Statistical Models for Causation: What Inferential Leverage Do They Provide?
 Evaluation Review, 30, 691–713. http://www.stat.berkeley.edu/users/census/oxcauser.pdf
, 2006
Abstract

Cited by 11 (4 self)
Experiments offer more reliable evidence on causation than observational studies, which is not to gainsay the contribution to knowledge from observation. Experiments should be analyzed as experiments, not as observational studies. A simple comparison of rates might be just the right tool, with little value added by “sophisticated” models. This article discusses current models for causation, as applied to experimental and observational data. The intention-to-treat principle and the effect of treatment on the treated will also be discussed. Flaws in per-protocol and treatment-received estimates will be demonstrated.
Randomization does not justify logistic regression
 Advances in Applied Mathematics
, 2008
Abstract

Cited by 7 (1 self)
Logit models are often used to analyze experimental data. However, randomization does not justify the model, and estimators may be inconsistent. Here, Neyman’s nonparametric setup is used as a benchmark. Each subject has two potential responses, one if treated and the other if untreated; only one of the two responses is observed. A consistent estimator is proposed for use with the logit model. There is a brief literature review, and some recommendations for practice.
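To make the abstract's setup concrete, here is a small hypothetical simulation (not taken from the paper) of Neyman's potential-outcomes benchmark: each subject carries two fixed binary responses, randomization reveals exactly one of them, and the plain difference of observed rates estimates the average causal effect without fitting any logit model. All numbers and names below are illustrative assumptions.

```python
import random
import statistics

# Illustrative simulation of Neyman's nonparametric setup.
random.seed(0)
N = 10_000

# (response if untreated, response if treated) -- both fixed per subject
subjects = [(random.random() < 0.30, random.random() < 0.45) for _ in range(N)]

# The estimand: average causal effect over this study population
true_ate = statistics.mean(y1 - y0 for y0, y1 in subjects)

# Randomize half the subjects to treatment; only one response is observed
treated = set(random.sample(range(N), N // 2))

# Simple difference of observed rates -- consistent under randomization alone
rate_t = statistics.mean(subjects[i][1] for i in treated)
rate_c = statistics.mean(subjects[i][0] for i in range(N) if i not in treated)
estimate = rate_t - rate_c
```

With 10,000 subjects the difference of rates lands close to the population average effect, which is the point of the benchmark: no model of the response is needed.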
Attention felons: Evaluating Project Safe Neighborhoods in Chicago
 Journal of Empirical Legal Studies
, 2007
Abstract

Cited by 6 (0 self)
This research uses a quasi-experimental design to evaluate the impact of Project Safe Neighborhoods (PSN) initiatives on neighborhood-level crime rates in Chicago. Four interventions are analyzed: (1) increased federal prosecutions for convicted felons carrying or using guns, (2) the length of sentences associated with federal prosecutions, (3) supply-side firearm policing activities, and (4) social marketing of deterrence and social norms messages through justice-style offender notification meetings. Using individual growth curve models and propensity scores to adjust for nonrandom group assignment of neighborhoods, our findings suggest that several PSN interventions are associated with greater declines in homicide in the treatment neighborhoods compared to the control neighborhoods. The largest effect is associated with the offender notification meetings, which stress individual deterrence, normative change in offender behavior, and views of legitimacy and procedural justice. Possible competing hypotheses and directions for individual-level analysis are also discussed.
Statistical inference after model selection
 Journal of Quantitative Criminology
, 2010
Abstract

Cited by 6 (5 self)
Conventional statistical inference requires that a model of how the data were generated be known before the data are analyzed. Yet in criminology, and in the social sciences more broadly, a variety of model selection procedures are routinely undertaken, followed by statistical tests and confidence intervals computed for a “final” model. In this paper, we examine such practices and show how they are typically misguided. The parameters being estimated are no longer well defined, and post-model-selection sampling distributions are mixtures ...
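The inflation the abstract describes can be seen in a hypothetical stdlib-only simulation (sample sizes and cutoffs are illustrative assumptions, not the authors' code): under a global null, testing one pre-specified predictor rejects at roughly the nominal 5% level, while testing the strongest of ten candidates chosen from the same data rejects far more often.

```python
import math
import random

random.seed(1)

def corr(xs, ys):
    # Plain Pearson correlation, stdlib only
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

n, K, reps, crit = 50, 10, 2000, 1.96
reject_fixed = reject_selected = 0
for _ in range(reps):
    y = [random.gauss(0, 1) for _ in range(n)]  # outcome unrelated to any predictor
    ts = []
    for _ in range(K):
        x = [random.gauss(0, 1) for _ in range(n)]
        r = corr(x, y)
        ts.append(abs(r) * math.sqrt(n - 2) / math.sqrt(1 - r * r))
    reject_fixed += ts[0] > crit       # predictor fixed in advance: ~5% level
    reject_selected += max(ts) > crit  # "best" predictor picked from the data
```

The selected-predictor rejection rate runs several times the nominal level, which is exactly why the post-selection sampling distribution is no longer the textbook one.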
New Claims about Executions and General Deterrence: Déjà Vu All Over Again
 Journal of Empirical Legal Studies
, 2005
Abstract

Cited by 4 (0 self)
A number of papers have recently appeared claiming to show that in the United States executions deter serious crime. There are many statistical problems with the data analyses reported. This paper addresses the problem of “influence,” which occurs when a very small and atypical fraction of the data dominates the statistical results. The number of executions by state and year is the key explanatory variable, and most states in most years execute no one. A very few states in particular years execute more than 5 individuals. Such values represent about 1% of the available observations. Reanalyses of the existing data are presented showing that claims of deterrence are a statistical artifact of this anomalous 1%.
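A hedged sketch of the influence problem, with made-up numbers rather than the paper's data: when 99% of observations sit at zero executions, the OLS slope is driven almost entirely by the remaining 1%, and changing the outcomes of just those ten points flips the sign of the estimated effect.

```python
import random

random.seed(2)

def ols_slope(xs, ys):
    # Slope of a simple least-squares line: cov(x, y) / var(x)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# 990 hypothetical state-years with zero executions; pure noise outcomes there
x = [0.0] * 990 + [6, 7, 8, 9, 10, 6, 7, 8, 9, 10]
noise = [random.gauss(0, 1) for _ in range(990)]

# Same anomalous 1% of observations, two hypothetical outcome patterns
y_down = noise + [random.gauss(-3, 1) for _ in range(10)]
y_up = noise + [random.gauss(3, 1) for _ in range(10)]

slope_down = ols_slope(x, y_down)  # "deterrence" appears
slope_up = ols_slope(x, y_up)      # the effect reverses
```

Ten observations out of a thousand determine the sign of the regression, which is the sense in which the deterrence estimates are an artifact of the anomalous 1%.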
Causal Inference and the Heckman Model
Abstract

Cited by 4 (0 self)
In the social sciences, evaluating the effectiveness of a program or intervention often leads researchers to draw causal inferences from observational research designs. Bias in estimated causal effects becomes an obvious problem in such settings. This article presents the Heckman Model as an approach sometimes applied to observational data for the purpose of estimating an unbiased causal effect and shows how the Heckman Model can be used to correct for the problem of selection bias. It discusses in detail the assumptions necessary before the approach can be used to make causal inferences. The Heckman Model makes assumptions about the relationship between two equations in an underlying behavioral model: a response schedule and a selection function. This article shows that the Heckman Model is particularly sensitive to the choice of variables included in the selection function. This is demonstrated empirically in the context of estimating the effect of commercial coaching programs on the SAT performance of high school students. Coaching effects for both sections of the SAT are estimated using data from the National Education Longitudinal Study of 1988. Small changes in the selection function are shown to have a big impact on estimated coaching effects under the Heckman Model.
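The selection-bias mechanism the Heckman model targets can be sketched in a toy simulation; all parameters here are illustrative assumptions, and this is the problem, not the Heckman estimator itself. When the error in the selection function is correlated with the error in the response schedule, a naive treated-versus-untreated comparison overstates the true effect.

```python
import random
import statistics

random.seed(3)
N = 20_000
TRUE_EFFECT = 1.0

y_treated, y_control = [], []
for _ in range(N):
    v = random.gauss(0, 1)               # error in the selection function
    u = 0.8 * v + random.gauss(0, 0.6)   # outcome error, correlated with v
    d = v > 0                            # subjects self-select into treatment
    y = TRUE_EFFECT * d + u              # response schedule
    (y_treated if d else y_control).append(y)

# Naive comparison confounds the treatment effect with selection on u
naive = statistics.mean(y_treated) - statistics.mean(y_control)
```

Here the naive difference lands well above 1.0 because treated subjects have systematically higher outcome errors; the Heckman approach models the two equations jointly to remove exactly this gap, at the price of the assumptions the article examines.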
Advice and influence: The flow of advice and the diffusion of innovation
 The XXVI International Sunbelt Social Network Conference. Vancouver, British Columbia
Abstract

Cited by 2 (2 self)
Finding the influential people in a community is key to the diffusion process of technological innovations, as well as other kinds of products. The ability to recognize the influential members of a community is important for diffusion policy makers and managers. This information is traditionally obtained through costly ethnographic studies, which are not necessarily efficient. In certain endeavors, the use of socioeconomic and demographic measures characteristic of those ethnographic studies is not effective, because the target population is very homogeneous. In the specific case of diffusion of advanced digital technologies in underserved communities or rural areas, the challenge of economic sustainability becomes an issue and the cost of traditional methods for finding the influential members becomes prohibitive.
Toward improved use of regression in macro-comparative analysis
 Comparative Social Research
, 2007
Abstract

Cited by 2 (1 self)
I agree with much of what Michael Shalev (2007) says in his paper, both about the limits of multiple regression and about how to improve quantitative analysis in macro-comparative research. With respect to the latter, Shalev suggests three avenues for advance: (1) improve regression through technical refinement; (2) combine regression with case studies (triangulation); (3) turn to alternative methods of quantitative analysis such as multivariate tables and graphs or factor analysis (substitution). I want to suggest some additional ways in which the use of regression in macro-comparative analysis could be improved. None involves technical refinement. Instead, most have to do with relatively basic aspects of quantitative analysis that seem, in my view, to be commonly ignored or overlooked.

LOOK AT THE DATA

Shalev’s third suggested path for progress consists of using tables, graphs, and tree diagrams to examine causal hierarchy and complexity and to identify cases meriting more in-depth scrutiny. This should be viewed not as (or at least not solely as) a substitute for regression but rather as a critical component of regression analysis. All of us were (I hope) taught in our first
Geographic Boundaries as Regression Discontinuities
, 2013
Abstract

Cited by 2 (1 self)
Political scientists often turn to natural experiments to draw causal inferences with observational data. Recently, the regression discontinuity design (RD) has become one popular type of natural experiment, given its relatively weak assumptions. We study a special type of regression discontinuity design where the discontinuity in treatment assignment is geographic. In this design, which we call the Geographic Regression Discontinuity (GRD) design, a geographic or administrative boundary splits units into treated and control areas, and analysts make the case that the division into treated and control areas occurs in an as-if random fashion. We show how this design is equivalent to a standard RD with two cutoffs, but we also clarify several methodological differences that arise in geographic contexts. We also offer an estimation method for geographically located treatment effects that can also be used to validate the identification assumptions using observable pre-treatment characteristics. We illustrate our methodological framework with a re-examination of the effects of political advertisements on voter turnout during a presidential campaign, exploiting the exogenous variation in the volume of presidential ads that is created by media market boundaries.
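A minimal sketch of the sharp RD logic underlying the GRD design, assuming a scalar running variable (signed distance to the boundary) and illustrative parameters: comparing mean outcomes just on either side of the cutoff approximately recovers the jump in the outcome at the boundary.

```python
import random
import statistics

random.seed(4)
N = 20_000
JUMP = 0.25       # true discontinuity at the boundary (assumed)
BANDWIDTH = 0.1   # window around the boundary for the local comparison

# dist: signed distance to the boundary; the positive side is "treated"
data = []
for _ in range(N):
    dist = random.uniform(-1, 1)
    treated = dist > 0
    # smooth trend in dist plus a jump exactly at the boundary
    y = 0.5 * dist + JUMP * treated + random.gauss(0, 0.2)
    data.append((dist, y))

# Difference of local means just on either side of the boundary
right = [y for d, y in data if 0 < d <= BANDWIDTH]
left = [y for d, y in data if -BANDWIDTH <= d <= 0]
rd_estimate = statistics.mean(right) - statistics.mean(left)
```

A local linear fit on each side would reduce the small bias that the smooth trend leaves in this difference of means; in the geographic setting the extra work the abstract describes is in handling distance to a two-dimensional boundary, i.e., an RD with two cutoffs.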