Results 1 – 7 of 7
Bayesian checking of the second level of hierarchical models
Statist. Sci., 2007
"... Abstract. Hierarchical models are increasingly used in many applications. Along with this increased use comes a desire to investigate whether the model is compatible with the observed data. Bayesian methods are well suited to eliminate the many (nuisance) parameters in these complicated models; in t ..."
Abstract

Cited by 23 (0 self)
Abstract. Hierarchical models are increasingly used in many applications. Along with this increased use comes a desire to investigate whether the model is compatible with the observed data. Bayesian methods are well suited to eliminate the many (nuisance) parameters in these complicated models; in this paper we investigate Bayesian methods for model checking. Since we contemplate model checking as a preliminary, exploratory analysis, we concentrate on objective Bayesian methods in which careful specification of an informative prior distribution is avoided. Numerous examples are given and different proposals are investigated and critically compared. Key words and phrases: Model checking, model criticism, objective
Diagnostic Checks for Discrete-Data Regression Models Using Posterior Predictive Simulations
, 1997
"... Model checking with discrete data regressions can be difficult because usual methods such as residual plots have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of goodnessoffit ..."
Abstract

Cited by 13 (8 self)
Model checking with discrete data regressions can be difficult because usual methods such as residual plots have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of goodness-of-fit tests in the presence of uncertainty in estimation of the parameters. We try this approach using a variety of discrepancy variables for generalized linear models fit to a historical data set on behavioral learning. We then discuss the general applicability of our findings in the context of a recent applied example on which we have worked. We find that the following discrepancy variables work well, in the sense of being easy to interpret and sensitive to important model failures: (a) structured displays of the entire data set, (b) general discrepancy variables based on plots of binned or smoothed residuals versus predictors, and (c) specific discrepancy variables created based on the particul...
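The posterior predictive checking procedure this abstract describes can be sketched in a few lines. The data, the conjugate Beta prior, and the "number of switches" discrepancy below are illustrative stand-ins chosen for simplicity, not the paper's actual data set or discrepancy variables:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed Bernoulli trials (a stand-in for a
# behavioral-learning sequence; not the paper's data).
y = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1])
n = len(y)

# Conjugate Beta(1, 1) prior on the success probability theta,
# so the posterior is Beta(1 + sum(y), 1 + n - sum(y)).
post_a, post_b = 1 + y.sum(), 1 + n - y.sum()

def discrepancy(data):
    """Number of switches (0->1 or 1->0): a discrepancy sensitive to
    serial dependence that an i.i.d. Bernoulli model cannot capture."""
    return int(np.sum(data[1:] != data[:-1]))

# Posterior predictive simulation: draw theta from the posterior, draw a
# replicated data set, and record the discrepancy of each replicate.
n_sims = 4000
t_obs = discrepancy(y)
t_rep = np.empty(n_sims)
for s in range(n_sims):
    theta = rng.beta(post_a, post_b)
    y_rep = rng.binomial(1, theta, size=n)
    t_rep[s] = discrepancy(y_rep)

# Posterior predictive p-value: P(T(y_rep) >= T(y_obs) | y_obs).
# Values very close to 0 or 1 flag a model failure in this direction.
ppp = np.mean(t_rep >= t_obs)
print(f"T(y_obs) = {t_obs}, posterior predictive p-value = {ppp:.3f}")
```

The binned-residual checks the abstract favors follow the same recipe: replace `discrepancy` with the average residual in each bin of the fitted values and compare observed bins with their replicated distributions.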
Measures of Surprise in Bayesian Analysis
 Duke University
, 1997
"... Measures of surprise refer to quantifications of the degree of incompatibility of data with some hypothesized model H 0 without any reference to alternative models. Traditional measures of surprise have been the pvalues, which are however known to grossly overestimate the evidence against H 0 . Str ..."
Abstract

Cited by 2 (2 self)
Measures of surprise refer to quantifications of the degree of incompatibility of data with some hypothesized model H0 without any reference to alternative models. Traditional measures of surprise have been the p-values, which are however known to grossly overestimate the evidence against H0. Strict Bayesian analysis calls for an explicit specification of all possible alternatives to H0, so Bayesians have not made routine use of measures of surprise. In this report we critically review the proposals that have been made in this regard. We propose new modifications, stress the connections with robust Bayesian analysis and discuss the choice of suitable predictive distributions which allow surprise measures to play their intended role in the presence of nuisance parameters. We recommend either the use of appropriate likelihood-ratio type measures or else the careful calibration of p-values so that they are closer to Bayesian answers. Key words and phrases. Bayes factors; Bayesian p-values; Bayesian robustness; Conditioning; Model checking; Predictive distributions.
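One concrete instance of the p-value calibration this abstract recommends is the well-known -e·p·log(p) bound of Sellke, Bayarri and Berger, which maps a p-value to a lower bound on the Bayes factor in favor of H0 over a broad class of alternatives. The sketch below is that published calibration, not a result quoted from this abstract:

```python
import math

def bayes_factor_lower_bound(p):
    """Sellke-Bayarri-Berger calibration: for 0 < p < 1/e, the quantity
    -e * p * log(p) is a lower bound on the Bayes factor for H0.
    Values far above p itself show how much the raw p-value
    overstates the evidence against H0."""
    if not 0 < p < 1 / math.e:
        raise ValueError("calibration applies for 0 < p < 1/e")
    return -math.e * p * math.log(p)

for p in (0.05, 0.01, 0.001):
    b = bayes_factor_lower_bound(p)
    # Lower bound on P(H0 | data) under equal prior odds: b / (1 + b).
    print(f"p = {p:5.3f}  ->  BF lower bound = {b:.3f}, "
          f"P(H0 | data) >= {b / (1 + b):.3f}")
```

For p = 0.05 the bound is about 0.41, i.e. the posterior probability of H0 under equal prior odds is at least roughly 0.29, which illustrates the abstract's point that p-values grossly overestimate the evidence against H0.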
Diagnostic checks for discrete data regression models using posterior predictive simulations (49, Part 2, pp. 247–268)
, 1997
"... Summary. Model checking with discrete data regressions can be dif®cult because the usual methods such as residual plots have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of good ..."
Abstract
Summary. Model checking with discrete data regressions can be difficult because the usual methods such as residual plots have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of goodness-of-fit tests in the presence of uncertainty in estimation of the parameters. We try this approach using a variety of discrepancy variables for generalized linear models fitted to a historical data set on behavioural learning. We then discuss the general applicability of our findings in the context of a recent applied example on which we have worked. We find that the following discrepancy variables work well, in the sense of being easy to interpret and sensitive to important model failures: structured displays of the entire data set, general discrepancy variables based on plots of binned or smoothed residuals versus predictors and specific discrepancy variables created on the basis of the particular concerns arising in an application. Plots of binned residuals are especially easy to use because their predictive distributions under the model are sufficiently simple that model checks can often be made implicitly. The following discrepancy variables did not work well: scatterplots of latent residuals defined from an underlying continuous model and quantile–quantile plots of these residuals.
Posterior Predictive Model Checking for Multidimensionality in Item Response Theory and Bayesian Networks (dissertation)
, 2006
"... If data exhibit a dimensional structure more complex than what is assumed, key conditional independence assumptions of the hypothesized model do not hold. The current work pursues posterior predictive model checking, a flexible family of Bayesian model checking procedures, as a tool for criticizing ..."
Abstract
If data exhibit a dimensional structure more complex than what is assumed, key conditional independence assumptions of the hypothesized model do not hold. The current work pursues posterior predictive model checking, a flexible family of Bayesian model checking procedures, as a tool for criticizing models in light of inadequately modeled dimensional structure. Factors hypothesized to influence dimensionality and dimensionality assessment are couched in conditional covariance theory and conveyed via geometric representations of multidimensionality. These factors and their hypothesized effects motivate a simulation study that investigates posterior predictive model checking in the context of item response theory for dichotomous observables. A unidimensional model is fit to data that follow compensatory or conjunctive multidimensional item response models to assess the utility of conducting posterior predictive model checking. Discrepancy measures are formulated at the level of individual items and pairs of items. A second study draws from the results of the first study and investigates the model checking techniques in the context of multidimensional Bayesian networks with inhibitory effects. Key findings include support for the