Results 1 – 10 of 94
Conceptualizing and testing random indirect effects and moderated mediation in multilevel models: new procedures and recommendations
Psychological Methods, 2006
Abstract

Cited by 65 (3 self)
The authors propose new procedures for evaluating direct, indirect, and total effects in multilevel models when all relevant variables are measured at Level 1 and all effects are random. Formulas are provided for the mean and variance of the indirect and total effects and for the sampling variances of the average indirect and total effects. Simulations show that the estimates are unbiased under most conditions. Confidence intervals based on a normal approximation or a simulated sampling distribution perform well when the random effects are normally distributed but less so when they are nonnormally distributed. These methods are further developed to address hypotheses of moderated mediation in the multilevel context. An example demonstrates the feasibility and usefulness of the proposed methods.
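The central quantity here, the average indirect effect when both paths are random, differs from the simple product of the average paths. A minimal Monte Carlo sketch (all numbers hypothetical, not taken from the paper): for random slopes a_j (X → M) and b_j (M → Y), the mean indirect effect is E[a]E[b] + cov(a, b), so the covariance between the random slopes cannot be ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population values for random slopes a_j (X -> M) and b_j (M -> Y)
mu = np.array([0.5, 0.4])          # mean of a, mean of b
cov = np.array([[0.04, 0.01],      # var(a),    cov(a, b)
                [0.01, 0.09]])     # cov(a, b), var(b)

# Draw per-cluster slopes and form the cluster-specific indirect effects a_j * b_j
ab = rng.multivariate_normal(mu, cov, size=100_000)
indirect = ab[:, 0] * ab[:, 1]

# The average indirect effect is E[a]E[b] + cov(a, b), not just E[a]E[b]
print(indirect.mean())              # ~ 0.5 * 0.4 + 0.01 = 0.21
print(mu[0] * mu[1] + cov[0, 1])    # 0.21
```

The extra covariance term is exactly why naive plug-in products of fixed-effect estimates understate (or overstate) the indirect effect when the random slopes covary.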
On specifying graphical models for causation, and the identification problem
Evaluation Review, 2004
Abstract

Cited by 29 (2 self)
This paper (which is mainly expository) sets up graphical models for causation, having a bit less than the usual complement of hypothetical counterfactuals. Assuming the invariance of error distributions may be essential for causal inference, but the errors themselves need not be invariant. Graphs can be interpreted using conditional distributions, so that we can better address connections between the mathematical framework and causality in the world. The identification problem is posed in terms of conditionals. As will be seen, causal relationships cannot be inferred from a data set by running regressions unless there is substantial prior knowledge about the mechanisms that generated the data. There are few successful applications of graphical models, mainly because few causal pathways can be excluded on a priori grounds. The invariance conditions themselves remain to be assessed.
Randomization does not justify logistic regression
Advances in Applied Mathematics, 2008
Abstract

Cited by 22 (1 self)
Logit models are often used to analyze experimental data. However, randomization does not justify the model, and estimators may be inconsistent. Here, Neyman’s nonparametric setup is used as a benchmark. Each subject has two potential responses, one if treated and the other if untreated; only one of the two responses is observed. A consistent estimator is proposed for use with the logit model. There is a brief literature review, and some recommendations for practice.
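Neyman's nonparametric setup, which the abstract takes as its benchmark, is easy to simulate (a toy sketch with hypothetical response probabilities, not the paper's proposed estimator): each subject carries two fixed potential responses, randomization determines which one is observed, and the plain difference in observed means consistently estimates the average causal effect with no logit model in sight.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Neyman setup: each subject has two fixed (binary) potential responses,
# one if treated (y1) and one if untreated (y0); only one is ever observed.
y0 = rng.binomial(1, 0.30, n)
y1 = rng.binomial(1, 0.45, n)

# Randomize exactly half to treatment and observe the corresponding response
treat = rng.permutation(np.repeat([0, 1], n // 2))
observed = np.where(treat == 1, y1, y0)

# The difference in observed means estimates the average causal effect
ate_true = y1.mean() - y0.mean()
ate_hat = observed[treat == 1].mean() - observed[treat == 0].mean()
print(ate_true, ate_hat)
```

The point of the benchmark is that this estimator is justified by the randomization itself, whereas the logit model adds assumptions the randomization does not deliver.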
Statistical Models for Causation: What Inferential Leverage Do They Provide?
Evaluation Review, 2006
Cited by 20 (4 self)
Statistical inference after model selection
Journal of Quantitative Criminology, 2010
Abstract

Cited by 18 (7 self)
Conventional statistical inference requires that a model of how the data were generated be known before the data are analyzed. Yet in criminology, and in the social sciences more broadly, a variety of model selection procedures are routinely undertaken, followed by statistical tests and confidence intervals computed for a “final” model. In this paper, we examine such practices and show how they are typically misguided. The parameters being estimated are no longer well defined, and post-model-selection sampling distributions are mixtures …
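The distortion the abstract describes can be demonstrated in a few lines (a toy sketch under invented settings, not the paper's own analysis): with pure-noise data, selecting the strongest of ten candidate predictors and then running the usual t-test on it "as if prespecified" rejects the true null far more often than the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, reps = 100, 10, 2000
rejections = 0

for _ in range(reps):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)        # y is pure noise: every true slope is 0
    # "Model selection": keep only the predictor most correlated with y
    j = np.argmax(np.abs(X.T @ y))
    x = X[:, j]
    # Then run the usual t-test on the selected predictor as if prespecified
    beta = x @ y / (x @ x)
    resid = y - beta * x
    se = np.sqrt(resid @ resid / (n - 1) / (x @ x))
    if abs(beta / se) > 1.96:
        rejections += 1

# The nominal 5% test rejects roughly 1 - 0.95**10, i.e. around 40% of the time
print(rejections / reps)
```

With ten independent null predictors, the selected t-statistic is the maximum of ten, so the conventional critical value no longer controls the error rate, which is the mixture problem the abstract names.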
New Claims about Executions and General Deterrence: Déjà Vu All Over Again
Journal of Empirical Legal Studies, 2005
Abstract

Cited by 14 (0 self)
A number of papers have recently appeared claiming to show that in the United States executions deter serious crime. There are many statistical problems with the data analyses reported. This paper addresses the problem of “influence,” which occurs when a very small and atypical fraction of the data dominate the statistical results. The number of executions by state and year is the key explanatory variable, and most states in most years execute no one. A very few states in particular years execute more than 5 individuals. Such values represent about 1% of the available observations. Reanalyses of the existing data are presented showing that claims of deterrence are a statistical artifact of this anomalous 1%.
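The influence problem generalizes beyond this dataset and is easy to reproduce (a synthetic sketch with invented numbers, not the paper's reanalysis): when an outcome responds only to a handful of extreme observations, the full-sample regression slope looks like a strong effect, and trimming the anomalous ~1% makes it vanish.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Hypothetical state-year panel: most observations execute no one, some
# execute a few, and about 1% execute more than 5 (the anomalous cases)
executions = np.zeros(n)
some = rng.choice(n, size=60, replace=False)
executions[some[:50]] = rng.integers(1, 6, size=50)    # 1..5 executions
executions[some[50:]] = rng.integers(6, 15, size=10)   # the anomalous 1%

# Construct an outcome that responds ONLY to the anomalous observations
homicide = 50.0 - 2.0 * executions * (executions > 5) + rng.normal(0, 5, n)

def slope(x, y):
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)

full = slope(executions, homicide)        # looks like strong "deterrence"
mask = executions <= 5
trimmed = slope(executions[mask], homicide[mask])  # effect gone without the 1%
print(full, trimmed)
```

A handful of high-leverage points can carry an entire regression result, which is why dropping roughly 1% of observations should not be able to overturn a robust finding.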
Causal Inference and the Heckman Model
Journal of Educational and Behavioral Statistics, 2004
Attention felons: Evaluating Project Safe Neighborhoods in Chicago
Journal of Empirical Legal Studies, 2007
Abstract

Cited by 11 (0 self)
This research uses a quasi-experimental design to evaluate the impact of Project Safe Neighborhoods (PSN) initiatives on neighborhood-level crime rates in Chicago. Four interventions are analyzed: (1) increased federal prosecutions for convicted felons carrying or using guns, (2) the length of sentences associated with federal prosecutions, (3) supply-side firearm policing activities, and (4) social marketing of deterrence and social norms messages through justice-style offender notification meetings. Using individual growth curve models and propensity scores to adjust for non-random group assignment of neighborhoods, our findings suggest that several PSN interventions are associated with greater declines of homicide in the treatment neighborhoods compared to the control neighborhoods. The largest effect is associated with the offender notification meetings that stress individual deterrence, normative change in offender behavior, and increasing views on legitimacy and procedural justice. Possible competing hypotheses and directions for individual-level analysis are also discussed. Driving down Interstate I-90, Julien passed a billboard just before Exit 14B that read: “Stop Bringing Guns to Chicago or Go Directly to Jail.” Julien had seen the sign before. In fact, it startled him enough to change his normal routine. Typically, Julien took a Greyhound bus when transporting the …
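The propensity-score adjustment the abstract relies on can be illustrated with a generic inverse-propensity-weighting sketch (hypothetical data and a known propensity score, not the paper's growth-curve specification): when a confounder drives both selection into treatment and the outcome, the naive treated-vs-control comparison is badly biased, while weighting by the propensity score recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Hypothetical confounder (e.g., baseline neighborhood crime) driving both
# selection into a PSN-style intervention and the outcome
baseline = rng.normal(0, 1, n)
p_treat = 1.0 / (1.0 + np.exp(-baseline))     # true propensity score
treat = rng.binomial(1, p_treat)
# True treatment effect is -2; the confounder also raises the outcome
outcome = 3.0 * baseline - 2.0 * treat + rng.normal(0, 1, n)

# Naive comparison is biased because treated units have high baselines
naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# Inverse-propensity weighting (here the score is known; in practice it
# would be estimated, e.g. by logistic regression on pre-treatment covariates)
w1 = treat / p_treat
w0 = (1 - treat) / (1 - p_treat)
ipw = (w1 @ outcome) / w1.sum() - (w0 @ outcome) / w0.sum()
print(naive, ipw)
```

In this sketch the naive contrast even has the wrong sign; weighting reconstructs what a randomized comparison would have shown, which is the role propensity scores play in the quasi-experimental design above.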
Toward improved use of regression in macro-comparative analysis
Comparative Social Research, 2007
Abstract

Cited by 8 (3 self)
I agree with much of what Michael Shalev (2007) says in his paper, both about the limits of multiple regression and about how to improve quantitative analysis in macro-comparative research. With respect to the latter, Shalev suggests three avenues for advance: (1) improve regression through technical refinement; (2) combine regression with case studies (triangulation); (3) turn to alternative methods of quantitative analysis such as multivariate tables and graphs or factor analysis (substitution). I want to suggest some additional ways in which the use of regression in macro-comparative analysis could be improved. None involves technical refinement. Instead, most have to do with relatively basic aspects of quantitative analysis that seem, in my view, to be commonly ignored or overlooked. LOOK AT THE DATA: Shalev’s third suggested path for progress consists of using tables, graphs, and tree diagrams to examine causal hierarchy and complexity and to identify cases meriting more in-depth scrutiny. This should be viewed not as (or at least not solely as) a substitute for regression but rather as a critical component of regression analysis. All of us were (I hope) taught in our first …
Geographic Boundaries as Regression Discontinuities
, 2013
Abstract

Cited by 8 (4 self)
Political scientists often turn to natural experiments to draw causal inferences with observational data. Recently, the regression discontinuity design (RD) has become one popular type of natural experiment given its relatively weak assumptions. We study a special type of regression discontinuity design where the discontinuity in treatment assignment is geographic. In this design, which we call the Geographic Regression Discontinuity (GRD) design, a geographic or administrative boundary splits units into treated and control areas, and analysts make the case that the division into treated and control areas occurs in an as-if random fashion. We show how this design is equivalent to a standard RD with two cutoffs, but we also clarify several methodological differences that arise in geographical contexts. We also offer an estimation method for geographically located treatment effects that can also be used to validate the identification assumptions using observable pre-treatment characteristics. We illustrate our methodological framework with a re-examination of the effects of political advertisements on voter turnout during a presidential campaign, exploiting the exogenous variation in the volume of presidential ads that is created by media market boundaries.
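The basic RD logic behind the geographic design can be sketched in one dimension (invented turnout numbers, not the paper's data or its two-cutoff estimator): with a signed distance to the boundary as the running variable, comparing all treated to all control units is confounded by smooth geographic trends, while comparing narrow bands on either side of the boundary isolates the jump.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000

# Hypothetical signed distance to a media-market boundary (negative = control side)
dist = rng.uniform(-10, 10, n)
treated = (dist > 0).astype(float)

# Turnout varies smoothly with distance, plus a 3-point jump at the boundary
turnout = 40 + 0.5 * dist + 3.0 * treated + rng.normal(0, 2, n)

# Naive comparison of all treated vs. all control units absorbs the trend
naive = turnout[treated == 1].mean() - turnout[treated == 0].mean()

# RD estimate: compare means within a narrow band on each side of the boundary
h = 0.5
rd = (turnout[(dist > 0) & (dist < h)].mean()
      - turnout[(dist < 0) & (dist > -h)].mean())
print(naive, rd)   # naive is inflated by the trend; rd is close to the true jump of 3
```

Local comparison at the boundary is the one-dimensional core of the GRD design; the geographic version complicates this with two-dimensional boundaries and compound treatments, which is what the paper's framework addresses.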