## Checking hierarchical models (2003)

Citations: 4 (2 self)

### BibTeX

```bibtex
@MISC{Bayarri03checkinghierarchical,
  author = {M. J. Bayarri and M. E. Castellanos},
  title  = {Checking hierarchical models},
  year   = {2003}
}
```

### Abstract

Hierarchical models are increasingly used in many applications. Along with this increased use comes a desire to investigate whether the model is compatible with the observed data. Bayesian methods are well suited to eliminating the many (nuisance) parameters in these complicated models; in this paper we investigate Bayesian methods for model checking. Since we contemplate model checking as a preliminary, exploratory analysis, we concentrate on objective Bayesian methods in which careful specification of an informative prior distribution is avoided. Numerous examples are given and different proposals are investigated.

Key words and phrases: model checking; model criticism; objective Bayesian methods; p-values.

### Citations

3546 | Controlling the false discovery rate: a practical and powerful approach to multiple testing - Benjamini, Hochberg - 1995 |

Citation context: ...ng with p-values, adjustment is most likely done by classical methods (controlling either the family-wise error rate, as with the Bonferroni method, or the false discovery rate and related methods, as with the Benjamini and Hochberg, 1995, method). None of these methods is foolproof and the danger exists that they also result in a lack of power. O’Hagan (2003) Example (cont.): We compute the conflict p-values for the O’Hagan data set. We...
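
The excerpt above contrasts family-wise corrections (Bonferroni) with the Benjamini-Hochberg step-up procedure for controlling the false discovery rate. A minimal sketch of that step-up rule; the p-values are made up for illustration and are not from the paper:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: find the largest k with
    p_(k) <= k*q/m and reject the k hypotheses with smallest p-values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index meeting the bound
        rejected[order[:k + 1]] = True     # reject all up to and including k
    return rejected

# Illustrative p-values (not from the paper)
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))
```

Note the step-up shape: a p-value above its own threshold can still be rejected if a larger p-value meets its bound, which is where the extra power over Bonferroni comes from.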

1546 | Bayesian Data Analysis - Gelman, Carlin, et al. - 1997 |

1429 | Statistical Decision Theory and Bayesian Analysis - Berger - 1985 |

Citation context: ...e simply do not address choice of T in this paper). As measures of conflict in 3, we explore the two best-known measures of surprise, namely the p-value and the relative predictive surprise, RPS (see Berger, 1985, Section 4.7.2), used (with variants) by many authors. These two measures are defined as \(p = \Pr^{h(\cdot)}(t(X) \ge t(x_{obs}))\) (1.1) and \(RPS = h(t(x_{obs})) / \sup_t h(t)\) (1.2). Note that small values of (1.1) and ...
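
Once a null predictive \(h(\cdot)\) for the statistic t is fixed, both surprise measures quoted above, the p-value (1.1) and the relative predictive surprise (1.2), are easy to compute by Monte Carlo. A sketch assuming, purely for illustration, a standard normal h (so \(\sup_t h(t)\) is attained at t = 0):

```python
import numpy as np

rng = np.random.default_rng(0)

def h_density(t):
    # Assumed null predictive h(.) for t: standard normal (illustration only)
    return np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)

def surprise_measures(t_obs, n_sim=100_000):
    """Monte Carlo p-value (1.1) and relative predictive surprise (1.2)."""
    t_sim = rng.standard_normal(n_sim)       # draws of t(X) under h(.)
    p_value = np.mean(t_sim >= t_obs)        # Pr^h( t(X) >= t(x_obs) )
    rps = h_density(t_obs) / h_density(0.0)  # sup_t h(t) is attained at t = 0
    return p_value, rps

p, rps = surprise_measures(t_obs=2.5)
print(p, rps)  # both small: t_obs = 2.5 is surprising under h
```

Small values of either measure flag conflict between the observed t and the null predictive, matching the remark at the end of the excerpt.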

1194 | Bayesian Theory - Bernardo, Smith - 2000 |

Citation context: ... 5, for the O’Hagan data set:

|                        | θ1   | θ2   | θ3   | θ4   | θ5   |
|------------------------|------|------|------|------|------|
| O’Hagan priors         | 0.43 | 0.14 | 0.22 | 0.46 | 4.81 |
| Non-informative priors | 0.16 | 0.09 | 0.11 | 0.16 | 1.36 |

...eralizing, cross-validation methods (see Gelfand, Dey and Chang, 1992; Bernardo and Smith, 1994, Chap. 6). In cross-validation, to check adequacy of group i, the data in group i, Xi, is used to compute the ‘surprise’ statistic (or diagnostic measure), whereas the rest of the data, X−i, is used to tr...

134 | Bayesianly justifiable and relevant frequency calculations for the applied statistician - Rubin |

120 | Sampling and Bayes’ inference in scientific modelling and robustness (with discussion) - Box |

Citation context: ...The natural Bayesian choice for h(·) is the prior predictive distribution, in which the parameters get naturally integrated out with respect to the prior distribution (Box, 1980, pioneered the use of p-values computed in the prior predictive for Bayesian model criticism). However, this requires a fairly informative prior distribution, which might not be desirable for model checkin...

105 | Model Determination Using Predictive Distributions with Implementation via Sampling-Based Methods - Gelfand, Dey, et al. - 1992 |

Citation context: ...sterior medians of ci, i = 1, . . . , 5, for the O’Hagan data set. 5.4 ‘Conflict’ p-value. Marshall and Spiegelhalter (2001) proposed this approach based on, and generalizing, cross-validation methods (see Gelfand et al., 1992; Bernardo and Smith, 1994). In cross-validation, to check adequacy of group i, the data in group i, Xi, is used to compute the ‘surprise’ statistic (or diagnostic measure), whereas the rest of the data, ...

46 | P-values for composite null models - Bayarri, Berger - 2000 |

Citation context: ...ry desirable property, namely having the same interpretation across models. The uniformity of p-values has often been taken as their defining characteristic (more discussion and references can be found in Bayarri and Berger, 2000). In this section we simulate the null sampling distribution of \(p^{EB}_{prior}(X)\), \(p_{post}(X)\) and \(p_{ppp}(X)\), when X comes from a hierarchical normal-normal model as defined in (3.1). (We do not show the behav...

36 | Bayes and Empirical Bayes Methods for Data Analysis, Second Edition - Carlin, Louis - 2000 |

Citation context: ...this purpose (it would produce an improper h(·)). 2.1 Empirical Bayes (plug-in) measures. This is the simplest proposal, very intuitive and frequently used in empirical Bayes analysis (see, for example, Carlin and Louis, 2000). It simply consists of replacing the unknown η in π(θ | η) by an estimate (we use the MLE, but moment estimates are often used as well). In this proposal, θ is integrated out with respect to π^EB(θ...
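
The plug-in idea described above can be sketched for a normal-normal setting: estimate the hyperparameters from the marginal distribution of the data and then treat the estimated prior as known. The data, the known sampling variance `s2`, and the variance-floor estimator below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def eb_plugin(x, s2):
    """Plug-in hyperparameter estimates for a normal-normal model:
    X_i | theta_i ~ N(theta_i, s2), theta_i ~ N(mu, tau2), so marginally
    X_i ~ N(mu, tau2 + s2).  Estimate (mu, tau2) from the marginal and
    then use pi^EB(theta) = N(mu_hat, tau2_hat) as if it were known."""
    x = np.asarray(x, dtype=float)
    mu_hat = x.mean()                    # marginal MLE of mu
    tau2_hat = max(x.var() - s2, 0.0)    # marginal variance minus s2, floored at 0
    return mu_hat, tau2_hat

# Illustrative group means and (assumed known) sampling variance
print(eb_plugin([4.9, 5.3, 5.1, 4.6, 5.6], s2=0.04))
```

The floor at zero is the usual pragmatic fix when the marginal variance estimate falls below the sampling variance; it is exactly this "pretend the estimate is the truth" step that the paper's empirical Bayes measures build on.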

35 | The use of the concept of a future observation in goodness-of-fit problems - Guttman - 1967 |

25 |
Quantifying surprise in the data and model verification
- Bayarri, Berger
- 1999
(Show Context)
Citation Context ...ed situations, where the classical models would not apply. With their widespread use, comes along an increased need to check the adequacy of such models to the observed data. Recent Bayesian methods (=-=Bayarri and Berger, 1999-=-, 2000) have shown considerable promise in checking one-level models, specially in nonstandard situations in which parameter-free testing statistics are not known. In this paper we show how these meth... |

12 | Assessing normality in random effects models - Lange, Ryan - 1989 |

11 | A simulation-intensive approach for checking hierarchical models - Dey, Gelfand, et al. - 1998 |

10 | Approximate cross-validatory predictive checks in disease mapping models - Marshall, Spiegelhalter - 2003 |

5 | Bayesian measures of surprise for outlier detection - Bayarri, Morales - 2003 |

Citation context: ...on. To avoid computation of \(\hat\theta_c^{MLE}\), which can be rather time consuming, we use instead an estimate \(\hat\theta_c\) which we expect to be close enough (for our purposes) to \(\hat\theta_c^{MLE}\) for this model and this T (see Bayarri and Morales, 2003). In particular, we take all components to be equal and given by \(\hat\theta_c = \frac{1}{I-1}\sum_{l=1}^{I-1} X_{(l\cdot)}\), where \((X_{(1\cdot)}, \dots, X_{(I\cdot)})\) denote the group means sorted in ascending order. That is, we simply remov...
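
The shortcut estimate quoted above (average the sorted group means after dropping the largest one) is a one-liner; the group means below are made up for illustration:

```python
import numpy as np

def theta_c_hat(group_means):
    """Estimate from the excerpt: sort the I group means and average the
    smallest I-1 of them, i.e. drop only the largest (suspect) group."""
    x = np.sort(np.asarray(group_means, dtype=float))  # X_(1.) <= ... <= X_(I.)
    return x[:-1].mean()

# Made-up group means; the last group looks like a potential outlier
print(theta_c_hat([9.8, 10.1, 10.4, 9.9, 25.0]))
```

Excluding the largest group mean keeps the estimate from being dragged toward the very group whose adequacy is being checked.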


2 | Measures of surprise in Bayesian analysis - Bayarri, Berger - 1997 |

2 | A comparison between p-values for goodness-of-fit checking - Bayarri, Castellanos - 2001 |

2 | Which ‘base’ distribution for model criticism? Discussion on HSSS model criticism by A. O’Hagan - Bayarri - 2003 |

2 | Medidas de Sorpresa para bondad de ajuste [Measures of surprise for goodness of fit]. Master’s Thesis - Castellanos - 1999 |

2 | Diagnósticos Bayesianos de Modelos [Bayesian Model Diagnostics]. PhD Dissertation, Dpto. Estadística y Mat. Aplicada, Universidad Miguel Hernández - Castellanos - 2002 |

Citation context: ...notes the standard normal distribution function. The posterior empirical Bayes measures can similarly be derived in closed form, but they are of much less interest and we do not produce them here (see Castellanos, 2002). The inadequacies of \(m^{EB}_{post}\) for testing the null model can already be seen in the above formulae, but they are more evident in the particular homoscedastic, balanced case: \(\sigma_i^2 = \sigma^2\) and \(n_i = n\) ∀ ...

2 | HSSS model criticism (with discussion), in Highly Structured Stochastic Systems - O’Hagan - 2003 |


1 | Some comments on model criticism, discussion on HSSS model criticism - Gelfand - 2003 |

1 | Discussion on HSSS model criticism - Bayarri - 2003 |

1 | Generalized Monte Carlo Significance Tests - Besag, Clifford - 1989 |

Citation context: ...tributions of d given R data sets \(x^r\), for r = 1, . . . , R, generated from the (null) predictive model; note that the method requires proper priors. Comparison is carried out via Monte Carlo tests (Besag and Clifford, 1989). Letting \(x^r\), for r = 0, denote the observed data \(x_{obs}\), their algorithm is as follows: for each posterior distribution of d given \(x^r\), r = 0, . . . , R, compute the vector of quantiles q^(r) = (q^(...
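
The Monte Carlo test step described above, ranking the observed diagnostic among R replicates generated from the null predictive model, can be sketched as follows. This is a simplified scalar version; the chi-square null diagnostic is an arbitrary stand-in for the paper's d, chosen only so the example runs:

```python
import numpy as np

rng = np.random.default_rng(1)

def monte_carlo_test(d_obs, simulate_d, R=999):
    """Monte Carlo test in the Besag-Clifford spirit: rank the observed
    diagnostic among R replicates drawn from the null predictive model."""
    d_sim = np.array([simulate_d() for _ in range(R)])
    # p-value: proportion of {observed + replicates} at least as extreme
    return (1 + np.sum(d_sim >= d_obs)) / (R + 1)

# Illustrative null: the diagnostic d is chi-square with 5 df under the model
simulate = lambda: rng.chisquare(5)
print(monte_carlo_test(d_obs=20.0, simulate_d=simulate))
```

Because the observed value is counted among its own reference set, the p-value is exact for any finite R under the null, which is why the method only needs proper priors and simulation, not the null distribution in closed form.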
