Results 1 - 6 of 6
Bayesian Model Selection in Social Research (with Discussion by Andrew Gelman & Donald B. Rubin, and Robert M. Hauser, and a Rejoinder)
 Sociological Methodology 1995, edited by Peter V. Marsden. Cambridge, Mass.: Blackwell.
, 1995
Abstract

Cited by 253 (19 self)
It is argued that P-values and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent variables, standard variable selection procedures can give very misleading results. Also, by selecting a single model, they ignore model uncertainty and so underestimate the uncertainty about quantities of interest. The Bayesian approach to hypothesis testing, model selection, and accounting for model uncertainty is presented. Implementing this is straightforward using the simple and accurate BIC approximation, and can be done using the output from standard software. Specific results are presented for most of the types of model commonly used in sociology. It is shown that this approach overcomes the difficulties with P-values and standard model selection procedures based on them. It also allows easy comparison of non-nested models, and permits the quantification of the evidence for a null hypothesis...
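The BIC approximation the abstract refers to can indeed be computed from standard regression output. A minimal sketch in Python, on synthetic data of our own construction (the variable names and the toy comparison are ours, not the paper's): two nested linear models are fit by least squares, BIC = k ln n − 2 ln L̂ is computed for each, and half the BIC difference serves as an approximate log Bayes factor for the smaller model.

```python
import numpy as np

def ols_bic(y, X):
    """BIC = k*ln(n) - 2*ln(Lhat) for a Gaussian linear model fit by OLS.
    k counts the regression coefficients plus the error variance."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    # Gaussian log-likelihood at the MLE (sigma^2 = rss/n plugged in)
    loglik = -0.5 * n * (np.log(2 * np.pi) + np.log(rss / n) + 1)
    k = p + 1
    return k * np.log(n) - 2 * loglik

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                 # irrelevant candidate predictor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

X_small = np.column_stack([np.ones(n), x1])
X_full = np.column_stack([np.ones(n), x1, x2])
bic_small, bic_full = ols_bic(y, X_small), ols_bic(y, X_full)

# Approximate log Bayes factor in favor of the smaller model
log_bf = (bic_full - bic_small) / 2
print(bic_small, bic_full, log_bf)
```

Unlike a P-value comparison, the BIC difference penalizes the extra parameter by ln n, so it does not automatically favor the larger model as n grows.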
Bayesian model-building by pure thought: some principles and examples
 Stat. Sinica
, 1996
Abstract

Cited by 9 (2 self)
Abstract: In applications, statistical models are often restricted to what produces reasonable estimates based on the data at hand. In many cases, however, the principles that allow a model to be restricted can be derived theoretically, in the absence of any data and with minimal applied context. We illustrate this point with three well-known theoretical examples from spatial statistics and time series. First, we show that an autoregressive model for local averages violates a principle of invariance under scaling. Second, we show how the Bayesian estimate of a strictly increasing time series, using a uniform prior distribution, depends on the scale of estimation. Third, we interpret local smoothing of spatial lattice data as Bayesian estimation and show why uniform local smoothing does not make sense. In various forms, the results presented here have been derived in previous work; our contribution is to draw out some principles that can be derived theoretically, even though in the past they may have been presented in detail in the context of specific examples. Key words and phrases: ARMA, Bayesian statistics, conditional autoregression, image, scaling, sieve, spatial smoothing, spatial statistics, time series.
A Simulation-Intensive Approach for Checking Hierarchical Models
 TEST
, 1998
Abstract

Cited by 8 (0 self)
Recent computational advances have made it feasible to fit hierarchical models in a wide range of serious applications. If one entertains a collection of such models for a given data set, the problems of model adequacy and model choice arise. We focus on the former. While model checking usually addresses the entire model specification, model failures can occur at each hierarchical stage. Such failures include outliers, mean structure errors, dispersion misspecification, and inappropriate exchangeabilities. We propose another approach which is entirely simulation based. It only requires the model specification and that, for a given data set, one be able to simulate draws from the posterior under the model. By replicating a posterior of interest using data obtained under the model we can "see" the extent of variability in such a posterior. Then, we can compare the posterior obtained under the observed data with this medley of posterior replicates to ascertain whether the former is in agr...
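The replication idea described in the abstract — generate data from the model itself, recompute the posterior, and see how the posterior under the observed data compares to that medley of replicates — can be illustrated with a toy conjugate example. The following sketch is our own construction under simple assumptions (normal data with known variance, normal prior on the mean), not the paper's hierarchical setup: the posterior mean is the summary being replicated.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau, sigma = 50, 2.0, 1.0            # prior sd tau, known data sd sigma

def posterior_mean(y):
    """Posterior mean of mu for mu ~ N(0, tau^2), y_i ~ N(mu, sigma^2)."""
    prec = 1 / tau**2 + len(y) / sigma**2
    return (y.sum() / sigma**2) / prec

# "Observed" data -- here simulated from the model, so the check should pass
y_obs = rng.normal(0.3, sigma, size=n)

# Replicate the posterior summary under data generated from the model itself
reps = []
for _ in range(1000):
    mu_star = rng.normal(0, tau)                    # draw parameter from the prior
    y_rep = rng.normal(mu_star, sigma, size=n)      # simulate a replicate data set
    reps.append(posterior_mean(y_rep))
reps = np.array(reps)

# Where does the observed posterior mean fall among the replicates?
p = np.mean(reps <= posterior_mean(y_obs))
print(p)
```

An extreme value of `p` (near 0 or 1) would flag the observed posterior as atypical of posteriors the model itself produces; here the data were generated from the model, so a moderate value is expected.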
Avoiding model selection in Bayesian social research
 Sociological Methodology
, 1994
Abstract

Cited by 3 (1 self)
Introduction Raftery's paper addresses two important problems in the statistical analysis of social science data: (1) choosing an appropriate model when so much data are available that standard P-values reject all parsimonious models; and (2) making estimates and predictions when there are not enough data available to fit the desired model using standard techniques. For both problems, we agree with Raftery that classical frequentist methods fail and that Raftery's suggested methods based on BIC can point in better directions. Nevertheless, we disagree with his solutions because, in principle, they are still directed off-target and only by serendipity manage to hit the target in special circumstances. Our primary criticisms of Raftery's proposals are that (1) he promises the impossible: the selection of a model that is adequate for specific purposes without consideration of those purposes; and (2) he uses the same limited tool for model averaging as for model selection, thereby
Bayesian Computation for Parametric Models of Heteroscedasticity in the Linear Model
, 1996
Abstract

Cited by 1 (0 self)
In the linear model with unknown variances, one can often model the heteroscedasticity as var(y_i) = σ² f(w_i; θ), where f is a fixed function, the w_i are the "weights" for the problem, and θ is an unknown parameter (f(w_i; θ) = w_i^(−θ) is a traditional choice). We show how to do a fully Bayesian computation in this simple linear setting and also for a hierarchical model. The full Bayesian computation has the advantage that we are able to average over our uncertainty in θ instead of using a point estimate. We carry out the computations for a problem involving forecasting U.S. Presidential elections, looking at different choices for f and the effects on both estimation and prediction. 1 Introduction In both the econometrics and statistics literature, a standard way to model heteroscedasticity in regression is through a parametric model for the unequal variances, as described in many places, e.g. Amemiya (1985), Greene (1990), Judge et al. (1985), Carroll & Ruppert (1988). M...
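The contrast the abstract draws — averaging over uncertainty in θ rather than plugging in a point estimate — can be sketched with a grid approximation. The code below is our own minimal illustration on synthetic data (the grid, priors, and all names are ours, not the paper's computation): for each θ on a grid, σ² and the regression coefficients are profiled out by weighted least squares, and a flat prior over the grid turns the profile likelihood into approximate posterior weights.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma, theta_true = 300, 0.5, 1.0
w = rng.uniform(0.5, 3.0, size=n)            # known "weights" w_i
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
# var(y_i) = sigma^2 * w_i^(-theta), i.e. noise sd = sigma * w_i^(-theta/2)
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * sigma * w ** (-theta_true / 2)

def profile_loglik(theta):
    """Gaussian log-likelihood with beta and sigma^2 profiled out,
    for the variance model var(y_i) = sigma^2 * w_i^(-theta)."""
    v = w ** (-theta)                        # variance factors f(w_i; theta)
    sw = np.sqrt(v)
    beta, *_ = np.linalg.lstsq(X / sw[:, None], y / sw, rcond=None)
    rss = np.sum((y - X @ beta) ** 2 / v)    # weighted residual sum of squares
    return -0.5 * (n * np.log(rss / n) + np.log(v).sum())

grid = np.linspace(-1, 3, 81)
ll = np.array([profile_loglik(t) for t in grid])
post = np.exp(ll - ll.max()); post /= post.sum()   # flat prior over the grid

theta_hat = grid[ll.argmax()]                      # point estimate of theta
theta_mean = float(post @ grid)                    # averages over uncertainty in theta
print(theta_hat, theta_mean)
```

Predictions made by mixing over `post` rather than conditioning on `theta_hat` propagate the uncertainty in θ, which is the advantage the abstract points to.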
Model Checking and Model Improvement
Abstract
Introduction Markov chain simulation, and Bayesian ideas in general, allow a wonderfully flexible treatment of probability models. In this chapter, we discuss two related ideas: (1) checking the fit of a model to data, and (2) improving a model by adding substantively meaningful parameters. Model improvement by expansion is also an important technique in assessing the sensitivity of inferences to untestable assumptions. We illustrate both these methods with an example of a mixture model fit to experimental data from psychology using the Gibbs sampler. Any Markov chain simulation is conditional on an assumed probability model. As the applied chapters of this book illustrate, these models can be complicated and generally rely on inherently unverifiable assumptions. From a practical standpoint, then, it is important to explore how inferences of substantive interest depend on the assumptions, and to test the assumptions where possible. 0.2 Model checking using posterior
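The model-checking step the abstract describes — comparing data to what the fitted model would generate — is commonly done with a posterior predictive check. The sketch below is our own toy version under simple assumptions (normal model with known variance, flat prior; the heavy-tailed "observed" data, the discrepancy statistic, and all names are ours, not the chapter's mixture example): parameter draws from the posterior generate replicate data sets, and a tail statistic locates the observed data among them.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.standard_t(df=3, size=100)            # heavier-tailed than the assumed model

# Assumed model: y_i ~ N(mu, 1), flat prior  =>  mu | y ~ N(ybar, 1/n)
n, ybar = len(y), y.mean()

def T(data):
    """Discrepancy statistic sensitive to tail behavior."""
    return np.abs(data).max()

t_obs = T(y)
t_rep = []
for _ in range(2000):
    mu = rng.normal(ybar, 1 / np.sqrt(n))      # draw mu from the posterior
    t_rep.append(T(rng.normal(mu, 1, size=n))) # replicate data under the model
ppp = np.mean(np.array(t_rep) >= t_obs)        # posterior predictive p-value
print(ppp)
```

A very small `ppp` indicates the normal model rarely produces extremes as large as those observed — the kind of failure that would motivate expanding the model (for example, toward a mixture) as the chapter discusses.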