Results 1–10 of 12
Bayesian Model Averaging for Linear Regression Models
 Journal of the American Statistical Association, 1997
Abstract

Cited by 263 (14 self)
We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making inferences about quantities of interest ...
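The averaging this abstract describes can be sketched in a few lines. The toy below is a hedged illustration under my own assumptions, not the authors' implementation: it enumerates every subset of a small predictor set, approximates each model's posterior probability with a BIC weight, and averages the coefficient estimates across models (excluded predictors contribute zero):

```python
import itertools
import numpy as np

def bma_linear(X, y):
    """Approximate Bayesian model averaging over all predictor subsets,
    using the BIC approximation to each model's marginal likelihood."""
    n, p = X.shape
    models, bics, betas = [], [], []
    for k in range(p + 1):
        for subset in itertools.combinations(range(p), k):
            # OLS fit of the model containing an intercept plus this subset
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            resid = y - Xs @ beta
            sigma2 = max(resid @ resid / n, 1e-12)
            bic = n * np.log(sigma2) + (len(subset) + 1) * np.log(n)
            full = np.zeros(p)          # coefficients padded with zeros
            for b, j in zip(beta[1:], subset):
                full[j] = b
            models.append(subset)
            bics.append(bic)
            betas.append(full)
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))  # BIC model weights
    w /= w.sum()
    beta_avg = w @ np.array(betas)          # model-averaged coefficients
    return beta_avg, w, models
```

The averaged coefficient reflects both within-model uncertainty and which models the data support, which is the source of the wider, more honest intervals the abstract argues for.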
Assessment and Propagation of Model Uncertainty, 1995
Abstract

Cited by 181 (0 self)
In this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the chance of catastrophic failure of the U.S. Space Shuttle.
Can one estimate the conditional distribution of post-model-selection estimators? Working paper, 2003
Abstract

Cited by 26 (5 self)
We consider the problem of estimating the conditional distribution of a post-model-selection estimator where the conditioning is on the selected model. The notion of a post-model-selection estimator here refers to the combined procedure resulting from first selecting a model (e.g., by a model selection criterion such as AIC or by a hypothesis testing procedure) and then estimating the parameters in the selected model (e.g., by least squares or maximum likelihood), all based on the same data set. We show that it is impossible to estimate this distribution with reasonable accuracy even asymptotically. In particular, we show that no estimator for this distribution can be uniformly consistent (not even locally). This follows as a corollary to (local) minimax lower bounds on the performance of estimators for this distribution. Similar impossibility results are also obtained for the conditional distribution of linear functions (e.g., predictors) of the post-model-selection estimator.
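To make the object of the impossibility result concrete, here is a hypothetical simulation (my own toy setup, not taken from the paper) of a post-model-selection estimator: a nuisance covariate is kept only when its t-statistic clears 1.96, and the coefficient of interest is then estimated from whichever model was selected:

```python
import numpy as np

def post_selection_sim(n=100, reps=2000, b2=0.2, rho=0.7, seed=0):
    """Simulate a post-model-selection estimator of b1 (true value 1.0):
    x2 is kept only when |t| > 1.96, and b1 is then estimated from
    whichever model survives, all on the same data."""
    rng = np.random.default_rng(seed)
    est = np.empty(reps)
    kept = np.empty(reps, dtype=bool)
    for r in range(reps):
        x1 = rng.standard_normal(n)
        x2 = rho * x1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
        y = 1.0 * x1 + b2 * x2 + rng.standard_normal(n)
        X = np.column_stack([x1, x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - 2)
        se2 = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        kept[r] = abs(beta[1] / se2) > 1.96
        # b1 from the full model if x2 is kept, else from the refit on x1 alone
        est[r] = beta[0] if kept[r] else (x1 @ y) / (x1 @ x1)
    return est, kept
```

Splitting `est` by `kept` exposes the two conditional distributions the paper studies; their mixture is the unconditional sampling distribution, and neither component behaves like the textbook distribution from a fixed model.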
When should epidemiologic regressions use random coefficients?
 Biometrics
Abstract

Cited by 25 (6 self)
SUMMARY. Regression models with random coefficients arise naturally in both frequentist and Bayesian approaches to estimation problems. They are becoming widely available in standard computer packages under the headings of generalized linear mixed models, hierarchical models, and multilevel models. I here argue that such models offer a more scientifically defensible framework for epidemiologic analysis than the fixed-effects models now prevalent in epidemiology. The argument invokes an antiparsimony principle attributed to L. J. Savage, which is that models should be rich enough to reflect the complexity of the relations under study. It also invokes the countervailing principle that you cannot estimate anything if you try to estimate everything (often used to justify parsimony). Regression with random coefficients offers a rational compromise between these principles as well as an alternative to analyses based on standard variable-selection algorithms and their attendant distortion of uncertainty assessments. These points are illustrated with an analysis of data on diet, nutrition, and breast cancer.
Statistical inference after model selection
 Journal of Quantitative Criminology, 2010
Abstract

Cited by 15 (7 self)
Conventional statistical inference requires that a model of how the data were generated be known before the data are analyzed. Yet in criminology, and in the social sciences more broadly, a variety of model selection procedures are routinely undertaken, followed by statistical tests and confidence intervals computed for a “final” model. In this paper, we examine such practices and show how they are typically misguided. The parameters being estimated are no longer well defined, and post-model-selection sampling distributions are mixtures ...
A statistical method for empirical testing of competing theories, 2010
Abstract

Cited by 6 (1 self)
Empirical testing of competing theories lies at the heart of social science research. We demonstrate that a well-known class of statistical models, called finite mixture models, provides an effective way of testing rival theories. In the proposed framework, each observation is assumed to be generated either from a statistical model implied by one of the competing theories or, more generally, from a weighted combination of multiple statistical models under consideration. Researchers can then estimate the probability that a specific observation is consistent with each rival theory. By modeling this probability with covariates, one can also explore the conditions under which a particular theory applies. We discuss a principled way to identify a list of observations that are statistically significantly consistent with each theory and propose measures of the overall performance of each competing theory. We illustrate the relative advantages of our method over existing methods through empirical and simulation studies. Since there typically exist alternative theories explaining the same phenomena, researchers can often increase the plausibility of their theory by empirically demonstrating its superior explanatory power over rival theories. In political ...
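A minimal sketch of the idea, under my own assumptions rather than the authors' specification: a two-component mixture of linear regressions fit by EM, where each component plays the role of a rival theory and the E-step responsibilities estimate the probability that each observation is consistent with each theory:

```python
import numpy as np

def mixture_of_regressions(X, y, n_iter=300, seed=1):
    """EM for a two-component mixture of linear regressions: each observation
    is assumed to come from one of two rival coefficient vectors ('theories'),
    and resp[i, k] estimates the probability observation i fits theory k."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    betas = rng.standard_normal((2, p))      # random start to break symmetry
    sigma = np.full(2, y.std() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.empty((n, 2))
        for k in range(2):
            r = y - X @ betas[k]
            dens[:, k] = pi[k] * np.exp(-0.5 * (r / sigma[k]) ** 2) / sigma[k]
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares and weighted residual scale
        for k in range(2):
            w = resp[:, k]
            Xw = X * w[:, None]
            betas[k] = np.linalg.solve(X.T @ Xw + 1e-8 * np.eye(p), Xw.T @ y)
            r = y - X @ betas[k]
            sigma[k] = max(np.sqrt((w * r ** 2).sum() / w.sum()), 1e-6)
        pi = resp.mean(axis=0)
    return betas, pi, resp
```

Modeling the mixing probability with covariates, as the abstract describes, would replace the constant `pi` with, e.g., a logistic model; that extension is omitted here.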
On Selecting Regressors To Maximize Their Significance, 1998
Abstract
A common problem in applied regression analysis is to select the variables that enter a linear regression. Examples are selection among capital stock series constructed with different depreciation assumptions, or use of variables that depend on unknown parameters, such as Box-Cox transformations, linear splines with parametric knots, and exponential functions with parametric decay rates. It is often computationally convenient to estimate such models by least squares, with variables selected from possible candidates by enumeration, grid search, or Gauss-Newton iteration to maximize their conventional least squares significance level; we term this method Prescreened Least Squares (PLS). This note shows that PLS is equivalent to direct estimation by nonlinear least squares, and thus statistically consistent under mild regularity conditions. However, standard errors and test statistics provided by least squares are biased. When explanatory variables are smooth in the parameters that index ...
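The procedure the abstract names can be sketched as follows; this is an illustrative toy under my own assumptions (power transforms `x**lam` as the candidate family), not the note's implementation:

```python
import numpy as np

def prescreened_least_squares(x, y, lambdas):
    """'Prescreened Least Squares' sketch: for each candidate transform
    x**lam, run OLS of y on [1, x**lam] and keep the lambda whose slope has
    the largest absolute t-statistic. The chosen point estimate behaves like
    nonlinear least squares over lam, but the naive OLS standard error at
    the chosen lambda ignores the search and is therefore biased."""
    n = len(x)
    best = None
    for lam in lambdas:
        Z = np.column_stack([np.ones(n), x ** lam])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r = y - Z @ beta
        s2 = r @ r / (n - 2)
        se = np.sqrt(s2 * np.linalg.inv(Z.T @ Z)[1, 1])
        t = abs(beta[1] / se)
        if best is None or t > best[0]:
            best = (t, lam, beta)
    return best  # (|t|, chosen lambda, coefficients)
```

The returned t-statistic is exactly the quantity the note warns against reporting: it is computed as if `lam` had been fixed in advance.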
Invited Commentary: Variable Selection versus Shrinkage in the Control of Multiple Confounders, 2007
Abstract
After screening out inappropriate or doubtful covariates on the basis of background knowledge, one may still be left with many potential confounders. It is then tempting to use statistical variable-selection methods to reduce the number used for adjustment. Nonetheless, there is no agreement on how selection should be conducted, and it is well known that conventional selection methods lead to confidence intervals that are too narrow and p values that are too small. Furthermore, theory and simulation evidence have found no selection method to be uniformly superior to adjusting for all well-measured confounders. Nonetheless, control of all measured confounders can lead to problems for conventional model-fitting methods. When these problems occur, one can apply modern techniques such as shrinkage estimation, exposure modeling, or hybrids that combine outcome and exposure modeling. No selection or special software is needed for most of these techniques. It thus appears that statistical confounder selection may be an unnecessary complication in most regression analyses of effects.
Keywords: Bayesian methods; collapsibility; confounding; epidemiologic methods; regression; shrinkage; validity; variable selection
Epidemiologists often have available a set of many potential confounders for a targeted exposure-disease relation and may attempt to select or delete variables from this set ...
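The shrinkage alternative the commentary advocates can be sketched as a ridge-penalized outcome regression; this is a minimal illustration under my own assumptions (function name, penalty choice), not the commentary's recommended software: keep every measured confounder, shrink their coefficients toward zero, and leave the exposure coefficient unpenalized:

```python
import numpy as np

def ridge_adjusted_effect(X, y, exposure, lam=1.0):
    """Estimate an exposure effect adjusting for all measured confounders
    at once: an L2 penalty shrinks the confounder coefficients toward zero
    instead of selecting a subset, while the intercept and the exposure
    coefficient itself are left unpenalized."""
    n = len(y)
    Z = np.column_stack([np.ones(n), exposure, X])
    P = np.eye(Z.shape[1]) * lam
    P[0, 0] = P[1, 1] = 0.0          # do not shrink intercept or exposure
    beta = np.linalg.solve(Z.T @ Z + P, Z.T @ y)
    return beta[1]                    # exposure coefficient
```

Unlike stepwise selection, this keeps a single prespecified model, so the downstream interval for the exposure effect is not distorted by a data-driven search over submodels.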