Results 1–10 of 37
Toward evidence-based medical statistics. 2: The Bayes factor
Annals of Internal Medicine, 1999
Nonparametric Methods for Doubly Truncated Data
Journal of the American Statistical Association, 1999
Abstract

Cited by 15 (0 self)
Truncated data plays an important role in the statistical analysis of astronomical ...
A Simulation-Intensive Approach for Checking Hierarchical Models
TEST, 1998
Abstract

Cited by 10 (0 self)
Recent computational advances have made it feasible to fit hierarchical models in a wide range of serious applications. If one entertains a collection of such models for a given data set, the problems of model adequacy and model choice arise. We focus on the former. While model checking usually addresses the entire model specification, model failures can occur at each hierarchical stage. Such failures include outliers, mean structure errors, dispersion misspecification, and inappropriate exchangeabilities. We propose another approach which is entirely simulation based. It only requires the model specification and that, for a given data set, one be able to simulate draws from the posterior under the model. By replicating a posterior of interest using data obtained under the model we can "see" the extent of variability in such a posterior. Then, we can compare the posterior obtained under the observed data with this medley of posterior replicates to ascertain whether the former is in agr...
Simulation Based Model Checking for Hierarchical Models
TEST, 1995
Abstract

Cited by 8 (4 self)
Recent computational advances have made it feasible to fit hierarchical models in a wide range of serious applications. If one entertains a collection of such models for a given data set, the problems of model adequacy and model choice arise. We focus on the former. While model checking usually addresses the entire model specification, model failures can occur at each hierarchical stage. Such failures include outliers, mean structure errors, dispersion misspecification, and inappropriate exchangeabilities. We propose another approach which is entirely simulation based. It only requires the model specification and that, for a given data set, one be able to simulate draws from the posterior under the model. By replicating a posterior of interest using data obtained under the model we can "see" the extent of variability in such a posterior. Then, we can compare the posterior obtained under the observed data with this medley of posterior replicates to ascertain whether the former is in agreement with them and accordingly, whether it is plausible that the observed data came from the proposed model. This suggests the large scale use of Monte Carlo tests, each focusing on a potential model failure. It thus suggests the possibility of examining not only the overall adequacy of the hierarchical model but, using suitable posteriors, the adequacy of each stage. This raises the question of when individual stages are separable and checkable, which we explore in some detail. Finally, we develop this strategy in the context of generalized linear mixed models and offer a simulation study to demonstrate its capabilities.
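The replicate-and-compare idea in this abstract can be sketched in a few lines. This is a minimal illustration using a toy conjugate model rather than the paper's hierarchical setting, and it compares only a scalar posterior summary (the posterior mean) instead of whole posteriors; all names and the choice of summary are assumptions for the sketch.

```python
# Minimal sketch of a simulation-based model check on a toy conjugate
# model (not the paper's hierarchical setting): y_i ~ N(theta, 1) with
# theta ~ N(0, tau2). We replicate a posterior summary under data
# simulated from the model and ask whether the summary computed from
# the observed data is typical of those replicates.
import numpy as np

rng = np.random.default_rng(0)
n, tau2 = 20, 1.0

def posterior_mean(y):
    # Conjugate posterior mean of theta given y (sampling variance 1).
    return tau2 * y.sum() / (n * tau2 + 1.0)

# "Observed" data, deliberately drawn from a shifted model (mean 3),
# so the check should flag a failure.
y_obs = rng.normal(3.0, 1.0, size=n)

# Replicate the posterior summary under the assumed model.
reps = np.empty(2000)
for i in range(reps.size):
    theta_star = rng.normal(0.0, np.sqrt(tau2))   # draw from the prior
    y_star = rng.normal(theta_star, 1.0, size=n)  # data under the model
    reps[i] = posterior_mean(y_star)

# Monte Carlo tail probability: a small value signals model failure.
p = np.mean(reps >= posterior_mean(y_obs))
print(f"observed summary {posterior_mean(y_obs):.2f}, Monte Carlo p = {p:.3f}")
```

Each such Monte Carlo test targets one potential failure; the paper's proposal runs many of them, one per hierarchical stage or failure mode.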
Empirical Bayes and item clustering effects in latent variable hierarchical models: A case study from the National Assessment of Educational Progress
Journal of the American Statistical Association, 2002
Objective Bayesian analysis of contingency tables
2002
Abstract

Cited by 7 (2 self)
The statistical analysis of contingency tables is typically carried out with a hypothesis test. In the Bayesian paradigm, default priors for hypothesis tests are typically improper, and cannot be used. Although such priors are available, and proper, for testing contingency tables, we show that for testing independence they can be greatly improved on by so-called intrinsic priors. We also argue that because there is no realistic situation that corresponds to the case of conditioning on both margins of a contingency table, the proper analysis of an a × b contingency table should only condition on either the table total or on only one of the margins. The posterior probabilities from the intrinsic priors provide reasonable answers in these cases. Examples using simulated and real data are given.
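For concreteness, a condition-on-the-table-total Bayes factor for independence can be sketched with ordinary uniform Dirichlet priors. This is a stand-in default analysis, not the intrinsic-prior method the paper develops (intrinsic priors are precisely meant to improve on defaults like this), and the function names are illustrative.

```python
# Toy Bayes factor for independence in a two-way table, conditioning on
# the table total only. Uniform Dirichlet(1,...,1) priors stand in for
# the paper's intrinsic priors; the multinomial coefficient cancels
# between models and is omitted.
from math import lgamma

def log_marginal_dirichlet(counts):
    # log of the marginal likelihood of multinomial counts under a
    # uniform Dirichlet prior (multinomial coefficient omitted).
    k, n = len(counts), sum(counts)
    return lgamma(k) - lgamma(n + k) + sum(lgamma(c + 1) for c in counts)

def log_bf_independence(table):
    # log Bayes factor of the saturated model against independence;
    # under independence the cell probabilities factor into row and
    # column margins, each given its own uniform Dirichlet prior.
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    cells = [c for r in table for c in r]
    m1 = log_marginal_dirichlet(cells)
    m0 = log_marginal_dirichlet(rows) + log_marginal_dirichlet(cols)
    return m1 - m0

# Strong association: log BF > 0 favors the saturated model.
print(log_bf_independence([[30, 5], [5, 30]]))
# A perfectly balanced table: log BF < 0 mildly favors independence.
print(log_bf_independence([[10, 10], [10, 10]]))
```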
Empirical Bayes and item-clustering effects in a latent variable hierarchical model: A case study from the National Assessment of Educational Progress
Journal of the American Statistical Association, 2002
Abstract

Cited by 3 (1 self)
Empirical Bayes regression procedures are often used in educational and psychological testing as extensions to latent variable models. The National Assessment of Educational Progress (NAEP) is an important national survey using such procedures. The NAEP applies empirical Bayes methods to models from item response theory to calibrate student responses to questions of varying difficulty. Due partially to the limited computing technology that existed when the NAEP was first conceived, NAEP analyses are carried out using a two-stage estimation procedure that ignores uncertainty about some model parameters. Furthermore, the item response theory model that the NAEP uses ignores the effect of item clustering created by the design of a test form. Using Markov chain Monte Carlo, we simultaneously estimate all parameters of an expanded model that considers item clustering to investigate the impact of item clustering and ignoring uncertainty about model parameters on an important outcome measure that the NAEP reports. Ignoring these two effects causes substantial underestimation of standard errors and induces a modest bias in location estimates.
Raindrop plots: a new way to display collections of likelihoods and distributions
American Statistician, 2003
Abstract

Cited by 2 (1 self)
In a variety of settings, it is desirable to display a collection of likelihoods over a common interval. One approach is simply to superimpose the likelihood curves. However, where there are more than a handful of curves, such displays are extremely difficult to decipher. An alternative is to display a point estimate with a confidence interval corresponding to each likelihood. However, these may be inadequate when the likelihood is not approximately normal, as can occur with small sample sizes or nonlinear models. A second dimension is needed to gauge the relative plausibility of different parameter values. We introduce the raindrop plot, a shaded figure over the range of parameter values having log-likelihood greater than some cutoff, with height proportional to the difference between the log-likelihood and the cutoff. In the case of a normal likelihood, this produces a reflected parabola, so that deviations from normality can be easily detected. An analogue of the raindrop plot can also be used to display estimated random effect distributions, posterior distributions, and predictive distributions.
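The construction described in this abstract is easy to sketch numerically. The routine below is a hypothetical outline (the function name and the cutoff rule are illustrative assumptions, not the authors' code): the drop covers the parameter values whose log-likelihood exceeds a cutoff, with height proportional to the excess, so a normal likelihood yields a reflected parabola.

```python
# Sketch of the raindrop-plot outline for one likelihood. The cutoff is
# taken a fixed number of log-likelihood units below the maximum; the
# returned height would be mirrored about the axis to draw the "drop".
import numpy as np

def raindrop_outline(loglik, grid, drop=2.0):
    # Evaluate the log-likelihood and keep the region above the cutoff.
    ll = loglik(grid)
    cutoff = ll.max() - drop
    inside = ll >= cutoff
    return grid[inside], (ll - cutoff)[inside]

# Normal log-likelihood for a mean: n observations, ybar = 1, sd = 1.
n, ybar = 25, 1.0
loglik = lambda theta: -0.5 * n * (theta - ybar) ** 2
theta, h = raindrop_outline(loglik, np.linspace(-1.0, 3.0, 401))

# For this normal case the height is a parabola peaking at theta = ybar,
# so plotting (theta, +h) and (theta, -h) gives the reflected-parabola drop.
print(theta.min(), theta.max(), h.max())
```

A plotting layer (e.g. a filled region between +h and -h for each likelihood in the collection) would then stack these drops on a common axis.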