Results 11-20 of 40
Actuarial modeling with MCMC and BUGS
 North American Actuarial Journal
, 2001
Abstract

Cited by 7 (0 self)
In this paper, the author reviews some aspects of Bayesian data analysis and discusses how a variety of actuarial models can be implemented and analyzed in accordance with the Bayesian paradigm using Markov chain Monte Carlo techniques via the BUGS (Bayesian inference Using Gibbs Sampling) suite of software packages. The emphasis is placed on actuarial loss models, but other applications are referenced, and directions are given for obtaining documentation for additional worked examples on the World Wide Web.
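To illustrate the Gibbs-sampling machinery that BUGS automates, here is a minimal hand-coded sketch for a toy Normal model with conjugate priors; the model, priors, and data are illustrative assumptions, not taken from the paper.

```python
import random
from statistics import mean

def gibbs_normal(y, iters=2000, burn=500, a=1.0, b=1.0, t0=1e-2, seed=1):
    """Gibbs sampler for y_i ~ Normal(mu, 1/tau) with conjugate priors
    mu ~ Normal(0, 1/t0) and tau ~ Gamma(a, b) (illustrative choices)."""
    rng = random.Random(seed)
    n, s = len(y), sum(y)
    mu, tau = 0.0, 1.0
    draws = []
    for i in range(iters):
        # Full conditional: mu | tau, y ~ Normal(tau*s/prec, 1/prec)
        prec = tau * n + t0
        mu = rng.gauss(tau * s / prec, (1.0 / prec) ** 0.5)
        # Full conditional: tau | mu, y ~ Gamma(a + n/2, rate = b + SS/2);
        # gammavariate takes a scale parameter, hence the reciprocal
        ss = sum((v - mu) ** 2 for v in y)
        tau = rng.gammavariate(a + n / 2.0, 1.0 / (b + ss / 2.0))
        if i >= burn:
            draws.append((mu, tau))
    return draws

# Toy data centred near 3; the posterior mean of mu should track the sample mean.
_rng = random.Random(7)
data = [_rng.gauss(3.0, 1.0) for _ in range(50)]
draws = gibbs_normal(data)
post_mu = mean(d[0] for d in draws)
```

Each update here is a draw from an exact full conditional, which is the special structure BUGS exploits when a model admits Gibbs sampling.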
On Bayesian model assessment and choice using cross-validation predictive densities
, 2001
Abstract

Cited by 7 (7 self)
We consider the problem of estimating the distribution of the expected utility of the Bayesian model (expected utility is also known as generalization error). We use the cross-validation predictive densities to compute the expected utilities. We demonstrate that in flexible nonlinear models having many parameters, the importance-sampling approximated leave-one-out cross-validation (IS-LOO-CV) proposed in (Gelfand et al., 1992) may not work. We discuss how the reliability of the importance sampling can be evaluated, and in case there is reason to suspect its reliability, we suggest using predictive densities from k-fold cross-validation (k-fold-CV). We also note that k-fold-CV has to be used if data points have certain dependencies. As the k-fold-CV predictive densities are based on slightly smaller data sets than the full data set, we use a bias correction proposed in (Burman, 1989) when computing the expected utilities. In order to assess the reliability of the estimated expected utilities, we suggest a quick and generic approach based on the Bayesian bootstrap for obtaining samples from the distributions of the expected utilities. Our main goal is to estimate how good (in terms of the application field) the predictive ability of the model is, but the distributions of the expected utilities can also be used for comparing different models. With the proposed method, it is easy to compute the probability that one method has better expected utility than some other method. If the predictive likelihood is used as a utility (instead
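The Bayesian bootstrap step described above can be sketched as follows; the per-observation utilities are made-up illustrative numbers, and the function is a generic sketch rather than the authors' implementation.

```python
import random

def bayesian_bootstrap(values, draws=1000, seed=0):
    """Sample from the distribution of the mean of `values` by
    reweighting with Dirichlet(1, ..., 1) weights (Rubin's Bayesian
    bootstrap), generated here as normalised Exp(1) variables."""
    rng = random.Random(seed)
    out = []
    for _ in range(draws):
        w = [rng.expovariate(1.0) for _ in values]
        tot = sum(w)
        out.append(sum(wi * v for wi, v in zip(w, values)) / tot)
    return out

# Illustrative per-observation log predictive utilities
utils = [-1.2, -0.8, -1.5, -0.9, -1.1, -1.3, -0.7, -1.0]
samples = bayesian_bootstrap(utils)
```

Because each sample is a convex combination of the observed utilities, the draws stay inside the observed range while still reflecting uncertainty about the expected utility.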
Fully Model-Based Approaches for Spatially Misaligned Data (unpublished)
, 1999
Abstract

Cited by 7 (2 self)
JSTOR is a notforprofit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. American Statistical Association is collaborating with JSTOR to digitize, preserve and extend access to Journal of the American Statistical Association.
Comparing Hierarchical Models for Spatiotemporally Misaligned Data using the DIC Criterion
, 1999
Abstract

Cited by 7 (1 self)
In this paper, we accomplish this comparison using the Deviance Information Criterion (DIC), a recently proposed generalization of the Akaike Information Criterion (AIC) designed for complex hierarchical model settings like ours. We investigate the use of the delta method for obtaining an approximate variance estimate for DIC, in order to attach significance to apparent differences between models. We illustrate our approach using a spatially misaligned dataset relating a measure of traffic density to pediatric asthma hospitalizations in San Diego County, California.
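For reference, DIC itself is cheap to compute from posterior draws: DIC = Dbar + pD with pD = Dbar - D(theta_bar). A sketch for a toy Normal(mu, 1) likelihood (the data and draws below are illustrative, not the San Diego data):

```python
import math
from statistics import mean

def deviance(mu, y):
    """D(mu) = -2 * log-likelihood of Normal(mu, 1) for data y."""
    ll = sum(-0.5 * math.log(2 * math.pi) - 0.5 * (v - mu) ** 2 for v in y)
    return -2.0 * ll

def dic(post_mus, y):
    """DIC = Dbar + pD, where pD = Dbar - D(posterior mean of mu)."""
    devs = [deviance(m, y) for m in post_mus]
    dbar = mean(devs)
    pd = dbar - deviance(mean(post_mus), y)
    return dbar + pd, pd

# Tiny worked example: for this quadratic deviance, pD equals
# (number of data points) * (population variance of the mu draws).
dic_val, p_d = dic([1.5, 2.0, 2.5], [1.0, 2.0, 3.0])
```

With three data points and draws of variance 1/6, the effective number of parameters pD comes out to 0.5 here.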
A taxonomy of latent structure assumptions for probability matrix decomposition models
 Psychometrika
, 2003
Abstract

Cited by 7 (5 self)
A taxonomy of latent structure assumptions (LSAs) for probability matrix decomposition (PMD) models is proposed which includes the original PMD model (Maris, De Boeck, & Van Mechelen, 1996) as well as a three-way extension of the multiple classification latent class model (Maris, 1999). It is shown that PMD models involving different LSAs are actually restricted latent class models with latent variables that depend on some external variables. For parameter estimation a combined approach is proposed that uses both a mode-finding algorithm (EM) and a sampling-based approach (Gibbs sampling). A simulation study is conducted to investigate the extent to which information criteria, specific model checks, and checks for global goodness of fit may help to specify the basic assumptions of the different PMD models. Finally, an application is described with models involving different latent structure assumptions for data on hostile behavior in frustrating situations. Key words: discrete data, matrix decomposition, Bayesian analysis, data augmentation, posterior predictive check, psychometrics. PMD models were introduced by Maris, De Boeck, and Van Mechelen (1996) to analyze three-way three-mode binary data. The data typically represent associations between two types of elements that are repeatedly observed, for instance, persons who judge whether or not they
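One of the model checks mentioned, the posterior predictive check, reduces to a simple tail-area computation once test statistics on replicated data sets are in hand; a generic sketch, not the authors' code:

```python
def ppp_value(observed_stat, replicated_stats):
    """Posterior predictive p-value: the share of test statistics computed
    on replicated data sets that are at least as extreme as the observed
    statistic. Values near 0 or 1 flag misfit."""
    hits = sum(1 for s in replicated_stats if s >= observed_stat)
    return hits / len(replicated_stats)

p = ppp_value(2.0, [1.0, 2.0, 3.0, 4.0])
```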
Bayesian Neural Networks with Correlating Residuals
 In IJCNN'99: Proceedings of the 1999 International Joint Conference on Neural Networks. IEEE
, 1999
Abstract

Cited by 7 (4 self)
Usually in multivariate regression problems it is assumed that the residuals of the outputs are independent of each other. In many applications a more realistic model would allow dependencies between the outputs. In this paper we show how a Bayesian treatment using the Markov Chain Monte Carlo (MCMC) method can allow for a full covariance matrix with Multi-Layer Perceptron (MLP) neural networks.
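The gain from a full covariance matrix is visible even in a two-output toy example: residuals that move together are more probable under a correlated Gaussian than under the usual independence assumption. A small hand-rolled sketch, illustrative rather than the paper's MCMC treatment:

```python
import math

def mvn2_logpdf(r, cov):
    """Log density of a bivariate Normal(0, cov) residual r = (r1, r2),
    with the 2x2 inverse and determinant written out by hand."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv00, inv01 = d / det, -b / det
    inv10, inv11 = -c / det, a / det
    # Quadratic form r' * cov^{-1} * r
    q = (r[0] * (inv00 * r[0] + inv01 * r[1])
         + r[1] * (inv10 * r[0] + inv11 * r[1]))
    return -math.log(2 * math.pi) - 0.5 * math.log(det) - 0.5 * q

# Aligned residuals under correlated vs. independent error models
correlated = mvn2_logpdf((1.0, 1.0), ((1.0, 0.8), (0.8, 1.0)))
independent = mvn2_logpdf((1.0, 1.0), ((1.0, 0.0), (0.0, 1.0)))
```

A model with the off-diagonal term free to be non-zero assigns higher likelihood to such co-moving residuals, which is the effect the full-covariance treatment captures.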
Criterion-based methods for Bayesian model assessment
 Statist. Sin
, 2001
Abstract

Cited by 5 (2 self)
Abstract: We propose a general Bayesian criterion for model assessment. The criterion is constructed from the posterior predictive distribution of the data, and can be written as a sum of two components, one involving the means of the posterior predictive distribution and the other involving the variances. It can be viewed as a Bayesian goodness-of-fit statistic which measures the performance of a model by a combination of how close its predictions are to the observed data and the variability of the predictions. We call this proposed predictive criterion the L measure; it is motivated by earlier work of Ibrahim and Laud (1994) and related to a criterion of Gelfand and Ghosh (1998). We examine the L measure in detail for the class of generalized linear models and survival models with right-censored or interval-censored data. We also propose a calibration of the L measure, defined as the prior predictive distribution of the difference between the L measures of the candidate model and the criterion-minimizing model, and call it the calibration distribution. The calibration distribution will allow us to formally compare two competing models based on their L measure values. We discuss theoretical properties of the calibration distribution in detail, and provide Monte Carlo methods for computing it. For the linear model, we derive an analytic closed-form expression for the L measure and the calibration distribution, and also derive a closed-form expression for the mean of the calibration distribution. These novel developments will enable us to fully characterize the properties of the L measure for each model under consideration and will facilitate a direct formal comparison between several models, including nonnested models. Informative priors based on historical data and computational techniques are discussed. Several simulated and real datasets are used to demonstrate the proposed methodology.
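The two components of the criterion translate directly into code: a predictive-variance term plus a weighted squared-bias term. A minimal sketch from posterior predictive draws; the weighting and the inputs are illustrative, and the paper should be consulted for the exact form:

```python
from statistics import mean, pvariance

def l_measure(pred_draws, y, nu=1.0):
    """L-measure sketch: sum of posterior predictive variances plus
    nu times the squared distance between predictive means and the
    observed data. pred_draws[i] holds predictive draws for y[i]."""
    var_term = sum(pvariance(d) for d in pred_draws)
    fit_term = sum((mean(d) - yi) ** 2 for d, yi in zip(pred_draws, y))
    return var_term + nu * fit_term
```

A model is penalised both for missing the data (large fit term) and for hedging with wide predictive distributions (large variance term), matching the description above.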
Key words and phrases: Calibration, model selection, predictive criterion, predictive distribution, variable selection.
Spatially Correlated Allocation Models for Count Data
, 2000
Abstract

Cited by 5 (0 self)
Spatial heterogeneity of count data on a rare phenomenon occurs commonly in many domains of application, particularly in disease mapping. We present new methodology to analyse such data, based on a hierarchical allocation model. We assume that the counts follow a Poisson model at the lowest level of the hierarchy, and introduce a finite mixture model for the Poisson rates at the next level. The novelty lies in the allocation model to the mixture components, which follows a spatially correlated process, the Potts model, and in treating the number of components of the spatial mixture as unknown. Inference is performed in a Bayesian framework using reversible jump MCMC. The model introduced can be viewed as a Bayesian semiparametric approach to specifying flexible spatial distributions in hierarchical models. It could also be used in contexts where the spatial mixture subgroups are themselves of interest, as in health care monitoring. Performance of the model and comparison wi...
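The Potts allocation step can be sketched as a single-site Gibbs sweep: each region's component label is drawn from its Poisson likelihood times a prior that rewards agreement with grid neighbours. Everything below (the grid, rates, and interaction strength) is an illustrative assumption, and this sketch keeps the number of components fixed rather than using reversible jump:

```python
import math
import random

def gibbs_potts_step(z, y, lam, psi, width, rng):
    """One Gibbs sweep over a width x width grid of counts y: each site's
    label z[i] is drawn with probability proportional to the Poisson
    likelihood (up to the log(y!) constant) times exp(psi * number of
    agreeing 4-neighbours), the Potts interaction."""
    K, n = len(lam), width * width
    for i in range(n):
        r, c = divmod(i, width)
        nb = [z[j] for j in (i - width, i + width, i - 1, i + 1)
              if 0 <= j < n and not (c == 0 and j == i - 1)
              and not (c == width - 1 and j == i + 1)]
        logp = [y[i] * math.log(lam[k]) - lam[k]
                + psi * sum(1 for v in nb if v == k) for k in range(K)]
        m = max(logp)
        w = [math.exp(v - m) for v in logp]
        u, acc = rng.random() * sum(w), 0.0
        for k in range(K):
            acc += w[k]
            if u <= acc:
                z[i] = k
                break
    return z

# All-zero counts strongly favour the low-rate component everywhere.
rng = random.Random(3)
width = 4
y = [0] * (width * width)
z = [0] * (width * width)
for _ in range(5):
    gibbs_potts_step(z, y, [0.5, 20.0], 1.0, width, rng)
```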
Fully Model-Based Approaches for Spatially Misaligned Data
 Division of Biostatistics, University of Minnesota
, 1998
Abstract

Cited by 4 (1 self)
In this paper we consider inference using multivariate data that are spatially misaligned, i.e., involving variables (typically counts or rates) which are aggregated over differing sets of regional boundaries. Geographic information systems (GISs) enable the simultaneous display of such data sets, but their current capabilities are essentially only descriptive, not inferential. We describe a hierarchical modeling approach which provides a natural solution to this problem through its ability to sensibly combine information from several sources of data and available prior information. Illustrating in the context of counts, allocation under non-nested regional grids is handled using conditionally independent Poisson-multinomial models. Explanatory covariates and multilevel responses are also easily accommodated, with spatial correlation modeled using a conditionally autoregressive (CAR) prior structure. Methods for dealing with missing values in spatial "edge zones" are also discussed. Li...
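The conditioning step behind the Poisson-multinomial construction is easy to state: given a source zone's aggregated total, the counts in its intersection sub-zones are multinomial with probabilities proportional to the sub-zones' expected rates. A small sketch with made-up weights:

```python
import random

def allocate_count(total, weights, seed=0):
    """Draw a Multinomial(total, w / sum(w)) allocation of an aggregated
    count across intersection zones, one unit at a time."""
    rng = random.Random(seed)
    tot_w = sum(weights)
    probs = [w / tot_w for w in weights]
    counts = [0] * len(weights)
    for _ in range(total):
        u, acc = rng.random(), 0.0
        for k, p in enumerate(probs):
            acc += p
            if u <= acc:
                counts[k] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point undershoot
    return counts

parts = allocate_count(1000, [1.0, 1.0, 2.0])
```

In the paper's hierarchy this conditional draw sits inside an MCMC loop, with the weights themselves modeled (e.g. via covariates and a CAR prior) rather than fixed as here.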