Results 1–10 of 21
Optimal Predictive Model Selection
Ann. Statist., 2002
Cited by 46 (2 self)
Abstract
Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss.
The interplay of Bayesian and frequentist analysis
Statist. Sci., 2004
Cited by 27 (0 self)
Abstract
Statistics has struggled for nearly a century over the issue of whether the Bayesian or frequentist paradigm is superior. This debate is far from over and, indeed, should continue, since there are fundamental philosophical and pedagogical issues at stake. At the methodological level, however, the fight has become considerably muted, with the recognition that each approach has a great deal to contribute to statistical practice and each is actually essential for full development of the other approach. In this article, we embark upon a rather idiosyncratic walk through some of these issues. Key words and phrases: Admissibility; Bayesian model checking; conditional frequentist; confidence intervals; consistency; coverage; design; hierarchical models; nonparametric
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity
Bayesian Statistics, 2007
Cited by 24 (2 self)
Abstract
The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the harmonic mean identity, which says that the reciprocal of the integrated likelihood is equal to the posterior harmonic mean of the likelihood. The simplest estimator based on the identity is thus the harmonic mean of the likelihoods. While this is an unbiased and simulation-consistent estimator, its reciprocal can have infinite variance and so it is unstable in general. We describe two methods for stabilizing the harmonic mean estimator. In the first one, the parameter space is reduced in such a way that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite-variance estimator. The resulting
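The harmonic mean identity described in this abstract can be sketched in a few lines. This is a generic illustration, not code from the paper; the function name is hypothetical, and the computation is done on the log scale to avoid underflow with realistic likelihood values.

```python
import numpy as np

def log_marginal_harmonic_mean(log_liks):
    """Harmonic mean identity: 1/p(y) = E_posterior[1/p(y | theta)],
    so p(y) is estimated by the harmonic mean of the likelihoods
    evaluated at posterior draws.  Kept on the log scale for stability."""
    log_liks = np.asarray(log_liks, dtype=float)
    n = log_liks.size
    # log of mean(1/L_i), via a shifted log-sum-exp of -log_liks
    m = np.max(-log_liks)
    log_mean_inverse = m + np.log(np.sum(np.exp(-log_liks - m))) - np.log(n)
    return -log_mean_inverse  # log of the harmonic-mean estimate of p(y)
```

As the abstract notes, this simplest estimator can be unstable because its reciprocal may have infinite variance, which is precisely what motivates the stabilized variants the paper develops.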
Model choice in time series studies of air pollution and mortality
, 2004
Cited by 7 (0 self)
Abstract
Summary. Multicity time series studies of particulate matter and mortality and morbidity have provided evidence that daily variation in air pollution levels is associated with daily variation in mortality counts. These findings served as key epidemiological evidence for the recent review of the US national ambient air quality standards for particulate matter. As a result, methodological issues concerning time series analysis of the relationship between air pollution and health have attracted the attention of the scientific community, and critics have raised concerns about the adequacy of current model formulations. Time series data on pollution and mortality are generally analysed by using log-linear, Poisson regression models for overdispersed counts with the daily number of deaths as outcome, the (possibly lagged) daily level of pollution as a linear predictor and smooth functions of weather variables and calendar time used to adjust for time-varying confounders. Investigators around the world have used different approaches to adjust for confounding, making it difficult to compare results across studies. To date, the statistical properties of these different approaches have not been comprehensively compared. To address these issues, we quantify and characterize model uncertainty and model choice in adjusting for seasonal and long-term trends in time series models of air pollution and mortality. First, we
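The log-linear Poisson regression structure described in this summary can be illustrated with a small simulation. Everything here is hypothetical: the simulated pollution series, the coefficient values, and the simple sine/cosine pair standing in for the smooth calendar-time terms; plain Fisher scoring also replaces the overdispersion-adjusted fitting an actual study would use.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
t = np.arange(n)
pm10 = rng.gamma(4.0, 5.0, n)  # hypothetical daily PM10 levels (mean ~20)
# Sine/cosine pair as a crude stand-in for smooth calendar-time terms
season = np.column_stack([np.sin(2 * np.pi * t / 365),
                          np.cos(2 * np.pi * t / 365)])
X = np.column_stack([np.ones(n), pm10, season])
beta_true = np.array([3.0, 0.005, 0.1, -0.1])
deaths = rng.poisson(np.exp(X @ beta_true))  # simulated daily death counts

def poisson_irls(X, y, iters=25):
    """Fit a log-linear Poisson regression by Fisher scoring (IRLS)."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())  # start from the intercept-only fit
    for _ in range(iters):
        mu = np.exp(X @ beta)           # fitted means
        XtWX = X.T @ (mu[:, None] * X)  # Fisher information
        beta = beta + np.linalg.solve(XtWX, X.T @ (y - mu))
    return beta

beta_hat = poisson_irls(X, deaths)  # beta_hat[1] is the pollution log-rate slope
```

With a long enough series, the fitted pollution coefficient recovers the simulated log relative rate per unit of PM10; the confounder-adjustment columns are exactly where the model-choice questions studied in the paper arise.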
Consistency of objective Bayes factors as the model dimension grows
The Annals of Statistics, 2010
Cited by 7 (0 self)
Abstract
In the class of normal regression models with a finite number of regressors, and for a wide class of prior distributions, a Bayesian model selection procedure based on the Bayes factor is consistent [Casella and Moreno, J. Amer. Statist. Assoc. 104 (2009) 1261–1271]. However, in models where the number of parameters increases as the sample size increases, properties of the Bayes factor are not totally understood. Here we study consistency of the Bayes factors for nested normal linear models when the number of regressors increases with the sample size. We pay attention to two successful tools for model selection: the [Schwarz, Ann. Statist. 6 (1978) 461–464] approximation to the Bayes factor, and the Bayes factor for intrinsic priors [Berger and Pericchi
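The Schwarz approximation mentioned in this abstract can be written down directly. A minimal sketch follows; the function names and the toy log-likelihood values in the usage note are illustrative only, not taken from the paper.

```python
import math

def bic(max_loglik, n_params, n_obs):
    """Schwarz criterion: BIC = -2 log L-hat + k log n."""
    return -2.0 * max_loglik + n_params * math.log(n_obs)

def schwarz_log_bayes_factor(loglik1, k1, loglik0, k0, n):
    """Schwarz approximation to the Bayes factor of model 1 over
    model 0: log BF_10 ~ (BIC_0 - BIC_1) / 2."""
    return (bic(loglik0, k0, n) - bic(loglik1, k1, n)) / 2.0
```

On the log-Bayes-factor scale the approximation penalizes each extra parameter by (log n)/2, which is exactly the penalty whose behaviour becomes delicate in the regime the paper studies, where the number of regressors grows with the sample size.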
Bayesian Model Selection in Factor Analytic Models
Cited by 4 (1 self)
Abstract
Factor analytic models are widely used in social science applications to study latent traits, such as intelligence, creativity, stress and depression, that cannot be accurately measured with a single variable. In recent years, there has been a rise in the popularity of factor models due to their flexibility in characterizing multivariate data. For example, latent factor
Efficient Bayesian model averaging in factor analysis
Duke University, 2006
Cited by 1 (1 self)
Abstract
Summary. Although factor analytic models have proven useful for covariance structure modeling and dimensionality reduction in a wide variety of applications, a challenging problem is uncertainty in the number of latent factors. This article proposes an efficient Bayesian approach for model selection and averaging in hierarchical models having one or more factor analytic components. In particular, the approach relies on a method for embedding each of the smaller models within the largest possible model. Bayesian computation can proceed within the largest model, while moving between submodels based on posterior model probabilities. The approach represents a type of parameter expansion, as one always samples within an encompassing model, incorporating extra parameters and latent variables when a smaller model is true. This results in a highly efficient stochastic search factor selection algorithm (SSFS) for identifying good factor models and performing model-averaged inferences. The approach is illustrated using simulated examples and a toxicology application.