Results 11-20 of 25
On Bayesian model assessment and choice using cross-validation predictive densities
, 2001
Abstract

Cited by 7 (7 self)
We consider the problem of estimating the distribution of the expected utility of a Bayesian model (expected utility is also known as generalization error). We use the cross-validation predictive densities to compute the expected utilities. We demonstrate that in flexible nonlinear models having many parameters, the importance-sampling approximation of leave-one-out cross-validation (IS-LOO-CV) proposed by Gelfand et al. (1992) may not work. We discuss how the reliability of the importance sampling can be evaluated, and in case there is reason to suspect its reliability, we suggest using predictive densities from k-fold cross-validation (k-fold-CV). We also note that k-fold-CV has to be used if the data points have certain dependencies. As the k-fold-CV predictive densities are based on slightly smaller data sets than the full data set, we use a bias correction proposed by Burman (1989) when computing the expected utilities. In order to assess the reliability of the estimated expected utilities, we suggest a quick and generic approach based on the Bayesian bootstrap for obtaining samples from the distributions of the expected utilities. Our main goal is to estimate how good (in terms of the application field) the predictive ability of the model is, but the distributions of the expected utilities can also be used for comparing different models. With the proposed method, it is easy to compute the probability that one method has a better expected utility than some other method. If the predictive likelihood is used as a utility (instead ...
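The Bayesian-bootstrap step described in the abstract above can be sketched in a few lines: given per-point utilities (here, invented log predictive densities standing in for k-fold-CV results), Dirichlet(1, ..., 1) weight vectors yield draws from the distribution of the expected utility. All data and sizes below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-point utilities, e.g. k-fold-CV log predictive
# densities log p(y_i | data without fold of i), for 200 points.
log_pred = rng.normal(loc=-1.0, scale=0.3, size=200)

def bayesian_bootstrap(u, n_draws=4000, rng=rng):
    """Sample from the distribution of the expected utility by
    reweighting the n observed utilities with Dirichlet(1,...,1)
    weights (Rubin's Bayesian bootstrap)."""
    n = len(u)
    w = rng.dirichlet(np.ones(n), size=n_draws)  # (n_draws, n) weights
    return w @ u                                  # weighted means

samples = bayesian_bootstrap(log_pred)
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"expected utility: {samples.mean():.3f}, 95% interval: [{lo:.3f}, {hi:.3f}]")
```

Comparing two models then amounts to computing the fraction of paired draws in which one model's expected utility exceeds the other's.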
Bayesian Neural Networks with Correlating Residuals
 In IJCNN'99: Proceedings of the 1999 International Joint Conference on Neural Networks. IEEE
, 1999
Abstract

Cited by 6 (4 self)
Usually in multivariate regression problems it is assumed that the residuals of the outputs are independent of each other. In many applications a more realistic model would allow dependencies between the outputs. In this paper we show how a Bayesian treatment using the Markov chain Monte Carlo (MCMC) method can allow for a full covariance matrix with Multi-Layer Perceptron (MLP) neural networks.
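The gain from modelling residual dependence can be illustrated without any MLP machinery: for synthetic two-output residuals with correlation 0.8, a full covariance matrix fits the residuals strictly better than the independence (diagonal) assumption. The data are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-output residuals with correlation 0.8.
cov_true = np.array([[1.0, 0.8],
                     [0.8, 1.0]])
resid = rng.multivariate_normal([0.0, 0.0], cov_true, size=500)

def neg_log_lik(resid, cov):
    """Gaussian negative log likelihood of residuals under covariance `cov`."""
    n, d = resid.shape
    prec = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum("ni,ij,nj->", resid, prec, resid)
    return 0.5 * (n * (d * np.log(2 * np.pi) + logdet) + quad)

full = neg_log_lik(resid, np.cov(resid.T, ddof=0))            # full covariance
diag = neg_log_lik(resid, np.diag(np.var(resid, axis=0)))     # independence assumption
print(f"NLL full covariance: {full:.1f}, NLL diagonal: {diag:.1f}")
```

Since the full sample covariance is the Gaussian maximum-likelihood estimate, its negative log likelihood is never worse than the diagonal restriction's.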
Actuarial modeling with MCMC and BUGS
 North American Actuarial Journal
, 2001
Abstract

Cited by 6 (0 self)
In this paper, the author reviews some aspects of Bayesian data analysis and discusses how a variety of actuarial models can be implemented and analyzed in accordance with the Bayesian paradigm using Markov chain Monte Carlo techniques via the BUGS (Bayesian inference Using Gibbs Sampling) suite of software packages. The emphasis is placed on actuarial loss models, but other applications are referenced, and directions are given for obtaining documentation for additional worked examples on the World Wide Web.
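The kind of conjugate Gibbs updating that BUGS exploits can be sketched by hand for a toy Poisson-gamma loss-count model: claim counts y_i ~ Poisson(lam_i), lam_i ~ Gamma(alpha, beta), beta ~ Gamma(c, d). Both full conditionals are Gamma, so each block is sampled exactly. The data and hyperparameters below are invented, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical claim counts for 20 policy groups.
counts = rng.poisson(lam=3.0, size=20)
alpha, c, d = 2.0, 1.0, 1.0  # assumed hyperparameters
n = len(counts)

# Gibbs sampler: alternate the two conjugate full conditionals.
beta = 1.0
draws = []
for it in range(3000):
    # lam_i | beta, y  ~  Gamma(alpha + y_i, rate = beta + 1)
    lam = rng.gamma(alpha + counts, 1.0 / (beta + 1.0))
    # beta | lam       ~  Gamma(c + n*alpha, rate = d + sum(lam))
    beta = rng.gamma(c + n * alpha, 1.0 / (d + lam.sum()))
    if it >= 500:  # discard burn-in
        draws.append(lam.mean())

print(f"posterior mean of the average claim rate: {np.mean(draws):.2f}")
```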
Spatially Correlated Allocation Models for Count Data
, 2000
Abstract

Cited by 5 (0 self)
Spatial heterogeneity of count data on a rare phenomenon occurs commonly in many domains of application, particularly in disease mapping. We present new methodology to analyse such data, based on a hierarchical allocation model. We assume that the counts follow a Poisson model at the lowest level of the hierarchy, and introduce a finite mixture model for the Poisson rates at the next level. The novelty lies in the allocation model to the mixture components, which follows a spatially correlated process, the Potts model, and in treating the number of components of the spatial mixture as unknown. Inference is performed in a Bayesian framework using reversible jump MCMC. The model introduced can be viewed as a Bayesian semiparametric approach to specifying a flexible spatial distribution in hierarchical models. It could also be used in contexts where the spatial mixture subgroups are themselves of interest, as in health care monitoring. Performance of the model and comparison wi...
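The core of a spatially correlated allocation model is the Potts-weighted label update: each site's component label is drawn with probability proportional to exp(psi * number of agreeing neighbours) times the Poisson likelihood of the site's count under that component's rate. The sketch below fixes two known rates and a known interaction strength psi, so it shows only the allocation step, not the full reversible jump scheme; lattice size, rates, and psi are invented.

```python
import math
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 10x10 lattice of counts from two Poisson rates:
# left half "low risk" (rate 1), right half "high risk" (rate 6).
side, rates, psi = 10, np.array([1.0, 6.0]), 0.8
truth = np.zeros((side, side), dtype=int)
truth[:, side // 2:] = 1
y = rng.poisson(rates[truth])

def log_pois(y, lam):
    """Poisson log pmf."""
    return y * math.log(lam) - lam - math.lgamma(y + 1)

# Gibbs sweeps over the allocation field z: for each site,
# P(z_s = k) proportional to exp(psi * #neighbours with label k) * Poisson(y_s; rate_k).
z = rng.integers(2, size=(side, side))
for _ in range(30):
    for i in range(side):
        for j in range(side):
            logp = np.zeros(2)
            for k in range(2):
                nb = sum(z[a, b] == k
                         for a, b in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                         if 0 <= a < side and 0 <= b < side)
                logp[k] = psi * nb + log_pois(y[i, j], rates[k])
            p = np.exp(logp - logp.max())
            p /= p.sum()
            z[i, j] = rng.choice(2, p=p)

accuracy = (z == truth).mean()
print(f"fraction of sites allocated to the true component: {accuracy:.2f}")
```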
Comparing Hierarchical Models for Spatiotemporally Misaligned Data using the DIC Criterion
, 1999
Abstract

Cited by 5 (1 self)
In this paper, we accomplish this comparison using the Deviance Information Criterion (DIC), a recently proposed generalization of the Akaike Information Criterion (AIC) designed for complex hierarchical model settings like ours. We investigate the use of the delta method for obtaining an approximate variance estimate for DIC, in order to attach significance to apparent differences between models. We illustrate our approach using a spatially misaligned dataset relating a measure of traffic density to pediatric asthma hospitalizations in San Diego County, California.
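DIC itself is cheap to compute from MCMC output: with D(theta) the deviance, DIC = Dbar + pD, where Dbar is the posterior mean deviance and pD = Dbar - D(thetabar) is the effective number of parameters. The sketch below uses a normal model with known sigma and simulated posterior draws (all values invented), so pD should come out near 1 for the single mean parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data and posterior draws for a Normal(theta, sigma) model
# with known sigma (draws approximate the posterior under a flat prior).
sigma = 1.0
y = rng.normal(2.0, sigma, size=50)
theta_draws = rng.normal(y.mean(), sigma / np.sqrt(len(y)), size=2000)

def deviance(theta, y, sigma):
    """-2 * log likelihood of a Normal(theta, sigma) model."""
    return -2.0 * np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                         - (y - theta) ** 2 / (2 * sigma**2))

D_bar = np.mean([deviance(t, y, sigma) for t in theta_draws])  # posterior mean deviance
D_hat = deviance(theta_draws.mean(), y, sigma)                  # deviance at posterior mean
p_D = D_bar - D_hat                                             # effective number of parameters
DIC = D_bar + p_D
print(f"p_D = {p_D:.2f}, DIC = {DIC:.2f}")
```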
Fully Model Based Approaches for Spatially Misaligned Data
 Division of Biostatistics, University of Minnesota
, 1998
Abstract

Cited by 3 (0 self)
In this paper we consider inference using multivariate data that are spatially misaligned, i.e., involving variables (typically counts or rates) which are aggregated over differing sets of regional boundaries. Geographic information systems (GISs) enable the simultaneous display of such data sets, but their current capabilities are essentially only descriptive, not inferential. We describe a hierarchical modeling approach which provides a natural solution to this problem through its ability to sensibly combine information from several sources of data and available prior information. Illustrating in the context of counts, allocation under non-nested regional grids is handled using conditionally independent Poisson-multinomial models. Explanatory covariates and multilevel responses are also easily accommodated, with spatial correlation modeled using a conditionally autoregressive (CAR) prior structure. Methods for dealing with missing values in spatial "edge zones" are also discussed. Li...
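The Poisson-multinomial idea can be sketched at its simplest: conditional on a source zone's total count, the count is split over the intersection atoms of the two regional grids with multinomial probabilities, here taken proportional to intersection area as a crude stand-in for the model's expected rates. Zone counts and areas below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical setup: counts observed on 3 source zones must be
# allocated across 2 non-nested target zones; `area` holds the area
# of each source-zone / target-zone intersection atom.
counts = np.array([40, 25, 60])
area = np.array([[0.7, 0.3],
                 [0.2, 0.8],
                 [0.5, 0.5]])

# Conditionally independent multinomial allocation per source zone.
probs = area / area.sum(axis=1, keepdims=True)
alloc = np.array([rng.multinomial(n, p) for n, p in zip(counts, probs)])
target_counts = alloc.sum(axis=0)  # totals on the target-zone grid
print("allocated target-zone counts:", target_counts,
      "| grand total preserved:", target_counts.sum())
```

The allocation conserves each source zone's total by construction, which is the constraint that makes inference on the misaligned grid coherent.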
Bayesian Input Variable Selection Using Cross-Validation Predictive Densities and Reversible Jump MCMC
, 2001
Abstract

Cited by 2 (2 self)
We consider the problem of input variable selection for a Bayesian model. With suitable priors it is possible to have a large number of input variables in Bayesian models, as less relevant inputs can have a smaller effect in the model. To make the model more explainable and easier to analyse, or to reduce the cost of making measurements or the cost of computation, it may be useful to select a smaller set of input variables. Our goal is to find a model with the smallest number of input variables having statistically or practically the same expected utility as the full model. A good estimate for the expected utility, with any desired utility, can be computed using cross-validation predictive densities (Vehtari and Lampinen, 2001). In the case of input selection, there are 2^K input combinations, and computing the cross-validation predictive densities for each model easily becomes computationally prohibitive. We propose to use the reversible jump Markov chain Monte Carlo (RJMCMC) method to find potentially useful input combinations, for which the final model choice and assessment is done using the cross-validation predictive densities. The RJMCMC visits the models according to their posterior probabilities. As models with negligible probability are probably not visited in finite time, the computational savings can be considerable compared to going through all possible models. The posterior probabilities of the models, given by the RJMCMC, are proportional to the product of the prior probabilities of the models and the prior predictive likelihoods of the models. The prior predictive likelihood measures the goodness of the model if no training data were used, and thus can be used to estimate the lower limit of the expected predictive likelihood. These estimates indicate ...
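The key point, that sampling model space beats enumerating all 2^K input combinations, can be illustrated with a simplified stand-in for RJMCMC: a random-walk Metropolis sampler over binary inclusion vectors, using a BIC least-squares score as a rough proxy for the log marginal likelihood (RJMCMC proper uses trans-dimensional proposals and exact marginal likelihoods). Visit frequencies then estimate posterior model probabilities. Data, dimensions, and coefficients are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: K candidate inputs, only the first two relevant.
n, K = 120, 8
X = rng.normal(size=(n, K))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def bic_score(mask):
    """BIC of the least-squares fit on the inputs selected by `mask`
    (a crude stand-in for the log marginal likelihood)."""
    Xs = X[:, mask] if mask.any() else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return -0.5 * (n * np.log(rss / n) + Xs.shape[1] * np.log(n))

# Metropolis over the 2^K inclusion vectors: flip one input per step.
mask = np.zeros(K, dtype=bool)
score = bic_score(mask)
visits = {}
for _ in range(4000):
    j = rng.integers(K)
    prop = mask.copy()
    prop[j] = ~prop[j]
    s = bic_score(prop)
    if np.log(rng.uniform()) < s - score:   # Metropolis acceptance
        mask, score = prop, s
    visits[tuple(mask)] = visits.get(tuple(mask), 0) + 1

best = max(visits, key=visits.get)
print("most-visited model:", best, "| distinct models visited:", len(visits))
```

Only a small fraction of the 256 possible models is ever visited, which is exactly the computational saving the abstract describes.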
Bayesian Inference for Prevalence in Longitudinal Two-Phase Studies
Abstract

Cited by 2 (0 self)
In this paper, Bayesian inferences for prevalence are developed using four different probit models for the diagnostic probabilities. The required computations are performed using Gibbs sampling (Gelfand and Smith, 1990). These models are then compared via the Deviance Information Criterion (DIC) recently introduced by Spiegelhalter et al. (1998).
Strategies for Inference Robustness in Complex Modelling: An Application to Longitudinal Performance Measures.
, 1999
Abstract

Cited by 1 (0 self)
Advances in computation mean it is now possible to fit a wide range of complex models, but selecting a model on which to base reported inferences is a difficult problem. Following an early suggestion of Box and Tiao, it seems reasonable to seek `inference robustness' in reported models, so that alternative assumptions that are reasonably well supported would not lead to substantially different conclusions. We propose a four-stage modelling strategy in which we: iteratively assess and elaborate an initial model, measure the support for each of the resulting family of models, assess the influence of adopting alternative models on the conclusions of primary interest, and identify whether an approximate model can be reported. These stages are semi-formal, in that they are embedded in a decision-theoretic framework but require substantive input for any specific application. The ideas are illustrated on a dataset comprising the success rates of 46 in-vitro fertilisation clinics over three years. The analysis supports a model that assumes 43 of the 46 clinics have odds on success that are evolving at a constant proportional rate (i.e. linear on a logit scale), while three clinics are outliers in the sense of showing nonlinear trends. For the 43 `linear' clinics, the intercepts and gradients can be assumed to follow a bivariate normal distribution except for one outlying intercept: the odds on success are significantly increasing for four clinics and significantly decreasing for three. This model displays considerable inference robustness and, although its conclusions could be approximated by other less-supported models, these would not be any more parsimonious. Technical issues include fitting mixture models of alternative hierarchical longitudinal models, t...