## On Bayesian model assessment and choice using cross-validation predictive densities (2001)

Citations: 7 (7 self)

### BibTeX

```bibtex
@techreport{Vehtari01onbayesian,
  author      = {Aki Vehtari and Jouko Lampinen},
  title       = {On Bayesian model assessment and choice using cross-validation predictive densities},
  institution = {},
  year        = {2001}
}
```

### Abstract

We consider the problem of estimating the distribution of the expected utility of a Bayesian model (the expected utility is also known as the generalization error). We use cross-validation predictive densities to compute the expected utilities. We demonstrate that in flexible non-linear models with many parameters, the importance-sampling approximation to leave-one-out cross-validation (IS-LOO-CV) proposed in (Gelfand et al., 1992) may not work. We discuss how the reliability of the importance sampling can be evaluated, and when there is reason to suspect it, we suggest using predictive densities from k-fold cross-validation (k-fold-CV) instead. We also note that k-fold-CV has to be used if the data points have certain dependencies. As the k-fold-CV predictive densities are based on slightly smaller training sets than the full data set, we use the bias correction proposed in (Burman, 1989) when computing the expected utilities. In order to assess the reliability of the estimated expected utilities, we suggest a quick and generic approach based on the Bayesian bootstrap for obtaining samples from the distributions of the expected utilities. Our main goal is to estimate how good the predictive ability of a model is in terms of the application field, but the distributions of the expected utilities can also be used for comparing models. With the proposed method, it is easy to compute the probability that one model has a better expected utility than another. If the predictive likelihood is used as a utility (instead
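The Bayesian-bootstrap step described in the abstract can be sketched as follows. Given per-observation utilities (e.g. cross-validation log predictive densities), each bootstrap draw reweights the observations with Dirichlet(1, …, 1) weights to produce one sample from the distribution of the expected utility; comparing draws for two models gives the probability that one has a better expected utility than the other. This is a minimal illustration, not the authors' implementation: the function names and the synthetic per-point utilities below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_bootstrap_utility(u, n_draws=4000, rng=rng):
    """Draw samples from the distribution of the expected utility.

    u : per-observation utilities, e.g. CV log predictive densities.
    Each draw weights the observations with Dirichlet(1, ..., 1)
    weights (the Bayesian bootstrap of Rubin, 1981) and returns the
    weighted mean as one sample of the expected utility.
    """
    u = np.asarray(u, dtype=float)
    # One Dirichlet(1, ..., 1) weight vector per bootstrap draw
    w = rng.dirichlet(np.ones(u.size), size=n_draws)
    return w @ u  # shape (n_draws,)

# Hypothetical per-point CV log predictive densities for two models
u_a = rng.normal(-1.0, 0.5, size=100)
u_b = rng.normal(-1.2, 0.5, size=100)

samp_a = bayesian_bootstrap_utility(u_a)
samp_b = bayesian_bootstrap_utility(u_b)

# Probability that model A has a better expected utility than model B
p_a_better = np.mean(samp_a > samp_b)
```

Because the Dirichlet weights have mean 1/n, the bootstrap draws are centered on the plain sample mean of the utilities; their spread reflects the uncertainty in the expected-utility estimate without any parametric assumption about the utility distribution.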