Results 11–20 of 115
Variable Selection in Nonparametric Random Effects Models
Abstract

Cited by 2 (2 self)
In analyzing longitudinal or clustered data with a mixed effects model (Laird and Ware, 1982), one may be concerned about violations of normality. Such violations can potentially impact subset selection for the fixed and random effects components of the model, inferences on the heterogeneity structure, and the accuracy of predictions. This article focuses on Bayesian methods for subset selection in nonparametric random effects models in which one is uncertain about the predictors to be included and the distribution of their random effects. We characterize the unknown distribution of the individual-specific regression coefficients using a weighted sum of Dirichlet process (DP) distributed latent variables. By using carefully chosen mixture priors for coefficients in the base distributions of the component DPs, we allow fixed and random effects to be effectively dropped out of the model. A stochastic search Gibbs sampler is developed for posterior computation, and the methods are illustrated using simulated data and real data from a multi-laboratory bioassay study.
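The key construction described above (a discrete random-effects distribution drawn from a Dirichlet process, with a base measure that places mass at zero so a coefficient can effectively drop out) can be sketched via truncated stick-breaking. This is a generic illustration, not the authors' model; all function names, parameter values, and the spike-and-slab form of the base measure are hypothetical choices for the sketch:

```python
import numpy as np

def truncated_dp_draw(alpha, base_sampler, K=50, rng=None):
    """Draw a discrete distribution G ~ DP(alpha, G0) via truncated stick-breaking:
    w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha), truncated at K atoms."""
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0  # force the K weights to sum to one at the truncation level
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    atoms = np.array([base_sampler(rng) for _ in range(K)])
    return w, atoms

def spike_slab_base(rng, p_zero=0.5, scale=1.0):
    """Base measure mixing a point mass at zero with a Gaussian slab: atoms equal
    to zero correspond to random effects that are effectively excluded."""
    return 0.0 if rng.random() < p_zero else rng.normal(0.0, scale)

w, atoms = truncated_dp_draw(alpha=2.0, base_sampler=spike_slab_base, K=100, rng=1)
```

Setting the final stick length to one truncates the infinite stick-breaking sum at K atoms while keeping the weights a valid probability vector.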
Bayes Estimate and Inference for Entropy and Information Index of Fit
Abstract

Cited by 2 (1 self)
Kullback-Leibler information is widely used for developing indices of distributional fit. The most celebrated of such indices is Akaike’s AIC, which is derived as an estimate of the minimum Kullback-Leibler information between the unknown data-generating distribution and a parametric model. In the derivation of AIC, the entropy of the data-generating distribution is bypassed because it is free from the parameters. Consequently, the AIC-type measures provide criteria for model comparison purposes only, and do not provide diagnostic information about the model fit. A nonparametric estimate of the entropy of the data-generating distribution is needed for assessing the model fit. Several entropy estimates are available and have been used for frequentist inference about information fit indices. A few entropy-based fit indices have been suggested for Bayesian inference. This paper develops a class of entropy estimates and provides a procedure for Bayesian inference on the entropy and a fit index. For the continuous case, we define a quantized entropy that approximates and converges to the entropy integral. The quantized entropy includes some well-known measures of sample entropy and the existing Bayes entropy estimates as its special cases. For inference about the fit, we use the candidate model as the expected distribution in the Dirichlet process prior and derive the posterior mean of the quantized entropy as the Bayes estimate. The maximum entropy characterization of the candidate model is then used to derive the prior and posterior distributions for the Kullback-Leibler information index of fit. The consistency of the proposed Bayes estimates for the entropy and the information index is shown. As by-products, the procedure also produces priors and posteriors for the model parameters and the moments.
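One classical spacing-based sample entropy estimate of the kind the abstract subsumes as a special case is Vasicek's estimator. The sketch below is a generic illustration, not the paper's quantized-entropy procedure; the default window m = √n and the boundary clamping are conventional but hypothetical choices here:

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Vasicek's spacing-based estimate of differential entropy:
    H ≈ (1/n) * sum_i log( n/(2m) * (x_(i+m) - x_(i-m)) ),
    where x_(.) are order statistics and indices are clamped at the boundaries."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(np.sqrt(n)))
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

rng = np.random.default_rng(0)
est = vasicek_entropy(rng.normal(size=20000))
# true differential entropy of N(0,1) is 0.5*log(2*pi*e) ≈ 1.4189
```

For large samples the estimate approaches the true entropy, so it can serve as the nonparametric ingredient in an information fit index of the kind described above.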
Issues in Claims Reserving and Credibility: A Semiparametric Approach with Mixed Models
, 2006
Abstract

Cited by 1 (1 self)
Verrall (1996) and England & Verrall (2001) first considered the use of smoothing methods in the context of claims reserving. They applied two smoothing procedures in a likelihood-based way, namely the locally weighted regression smoother (‘loess’) and the cubic smoothing spline smoother. Using the statistical methodology of semiparametric regression and its connection with mixed models (see e.g. Ruppert et al., 2003), this paper revisits smoothing models for loss reserving and credibility. Apart from the flexibility inherent to all semiparametric methods, advantages of the semiparametric approach developed here are threefold. Firstly, a Bayesian implementation of these smoothing models is relatively straightforward and allows simulation from the full predictive distribution of quantities of interest. Since the main interest of actuaries lies in prediction, this is a major advantage. Secondly, because the constructed models have an interpretation as (generalized) linear mixed models ((G)LMMs), standard statistical theory and software for (G)LMMs can be used. Thirdly, more complicated data sets, dealing for example with quarterly development in a reserving context, heavy tails, semi-continuous data, or extensive longitudinal data, can be modelled within this framework. Throughout this article, data examples illustrate these different aspects. Several comments are included regarding model specification, estimation and selection.
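The mixed-model representation of penalized splines that this paper builds on (Ruppert et al., 2003) can be sketched with a truncated-line basis, where a ridge penalty on the knot coefficients plays the role of the random-effects variance. This is a generic frequentist illustration with a fixed smoothing parameter, not the paper's Bayesian implementation; the function name, knot placement, and λ are hypothetical:

```python
import numpy as np

def pspline_fit(x, y, n_knots=10, lam=1.0):
    """Penalized spline y ≈ b0 + b1*x + sum_j u_j*(x - kappa_j)_+ with the knot
    coefficients u_j shrunk by a ridge penalty, mirroring their mixed-model role
    as random effects with variance sigma^2 / lam."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)   # truncated-line basis (x - kappa)_+
    C = np.column_stack([np.ones_like(x), x, Z])
    D = np.diag([0.0, 0.0] + [lam] * n_knots)          # penalize only the knot terms
    coef = np.linalg.solve(C.T @ C + D, C.T @ y)       # penalized least squares
    return C @ coef, coef

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)
fitted, coef = pspline_fit(x, y, n_knots=15, lam=0.1)
```

In the (G)LMM reading, λ is the ratio of the error variance to the random-effects variance, which is exactly what makes standard mixed-model software and, in the Bayesian version, full predictive simulation applicable.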
REALIZABLE PRODUCT LINE DESIGN OPTIMIZATION: COORDINATING MARKETING AND ENGINEERING MODELS VIA ANALYTICAL TARGET CASCADING
Abstract

Cited by 1 (0 self)
We present a novel modeling and solution methodology for the product line optimization problem. Our approach formally coordinates performance models from engineering design with consumer preference models from marketing to reach joint solutions that achieve both technical and market feasibility. The methodology, based on the analytical target cascading (ATC) formulation for hierarchical systems optimization, offers rigorous integration of a number of separate modeling disciplines – conjoint analysis for preference elicitation, a Hierarchical Bayesian (HB) account of consumer heterogeneity, and physical-geometric product design models – allowing them to intercommunicate effectively. These methods are known to work well in isolation, and their joint convergence properties are assured under ATC. We show how ATC, in concert with HB conjoint, allows efficient gradient-based search of the product characteristic space relative to a posterior profit-based objective function, while ensuring technical feasibility of the product line.
Identifying latent clusters of variability in longitudinal data
 Biostatistics
, 2007
Abstract

Cited by 1 (0 self)
SUMMARY: Means or other central tendency measures are by far the most common focus of statistical analyses. However, as Carroll (2003) noted, “systematic dependence of variability on known factors” may be “fundamental to the proper solution of scientific problems” in certain settings. We develop a latent cluster model that relates underlying “clusters” of variability to baseline or outcome measures of interest. Because estimation of variability is inextricably linked to estimation of trend, assumptions about underlying trends are minimized by using nonparametric regression estimates. The resulting residual errors are then grouped into unobserved clusters of variability that are in turn related to subject-level predictors of interest. An application is made to psychological affect data. KEY WORDS: Variance function; heteroscedasticity; cubic spline; nonparametric regression; longitudinal profiles.
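The two-stage idea above (detrend each profile nonparametrically, then group subjects by their residual variability) can be caricatured in a few lines. This toy version uses a running-mean trend and a simple one-dimensional two-means step in place of the paper's latent cluster model; every name and setting here is a hypothetical sketch:

```python
import numpy as np

def variability_clusters(profiles, window=5, n_iter=20):
    """Detrend each subject's profile with a running mean, summarize residual
    variability by log residual SD, and split subjects with 1-D 2-means."""
    kernel = np.ones(window) / window
    feats = []
    for y in profiles:
        trend = np.convolve(y, kernel, mode="same")  # crude nonparametric trend
        feats.append(np.log(np.std(y - trend) + 1e-12))
    feats = np.array(feats)
    centers = np.percentile(feats, [25, 75])         # initialize the two centers
    for _ in range(n_iter):
        labels = (np.abs(feats - centers[0]) > np.abs(feats - centers[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean()
    return labels, centers

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 100)
low = [np.sin(x) + rng.normal(0, 0.1, 100) for _ in range(10)]   # low-variability subjects
high = [np.sin(x) + rng.normal(0, 1.0, 100) for _ in range(10)]  # high-variability subjects
labels, centers = variability_clusters(low + high)
```

Because the trend is estimated nonparametrically, the residual-variability summary does not depend on a parametric mean model, which is the point the abstract emphasizes.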
A Hierarchical Model to Estimate Fish Abundance in Alpine Streams by using Removal Sampling Data from Multiple Locations
, 2013
Abstract

Cited by 1 (0 self)
OATAO is an open access repository that collects the work of Toulouse researchers and makes it freely available over the web where possible.
Hierarchical Multivariate CAR Models for
 in Bayesian Statistics 7
, 2002
Abstract
Survival models have a long history in the biomedical and biostatistical literature, and are enormously popular in the analysis of time-to-event data. Very often these data will be grouped into strata, such as clinical sites, geographic regions, and so on. Such data will often be available over multiple time periods, and for multiple diseases. In this paper, we consider hierarchical spatial process models for multivariate survival data sets which are spatiotemporally arranged.
Combining Snow Water Equivalent Data from Multiple Sources to
 Journal of Agricultural, Biological, and Environmental Statistics
, 2002
Abstract
Owing to the importance of snowfall to water supplies in the western United States, government agencies regularly collect data on snow water equivalent (the amount of water in snow) over this region. Several different measurement systems, of possibly different levels of accuracy and reliability, are in operation: snow courses, snow telemetry, aerial markers, and airborne gamma radiation. Data are available at more than 2000 distinct sites, dating back a variable number of years (in a few cases to 1910). Historically, these data have been used primarily to generate flood forecasts and short-term (intra-annual) predictions of streamflow and water supply. However, they also have potential for addressing the possible effects of long-term climate change on snowpack accumulations and seasonal water supplies. We present a Bayesian spatiotemporal analysis of the combined snow water equivalent (SWE) data from all four systems that allows for systematic differences in accuracy and reliability. The primary objectives of our analysis are (1) to estimate the long-term temporal trend in SWE over the western U.S. and characterize how this trend varies spatially, with quantifiable estimates of variability, and (2) to investigate whether there are systematic differences in the accuracy and reliability of the four measurement systems. We find substantial evidence of a decreasing temporal trend in SWE in the Pacific Northwest and northern Rockies, but no evidence of a trend in the intermountain region and southern Rockies. Our analysis also indicates that some of the systems differ significantly with respect to their accuracy and reliability.
Bayesian Catch Curve Analysis (Institute of Statistics Mimeo Series #2615)
Abstract
Catch curves have been used to estimate survival and instantaneous mortality for fish and wildlife populations for many years. In order to better analyze catch curve data from the Apostle Islands population of lake trout Salvelinus namaycush in Lake Superior, we develop a Bayesian approach to catch curve analysis. First, the proposed Bayesian approach is illustrated for a single catch curve and then extended to multiple years of data. We also relax the model assumption of a stable age distribution to allow random effects across years. The proposed models are compared with the traditional methods using the focused DIC. There are many potential advantages to the Bayesian approach over the traditional methods, such as least squares and maximum likelihood, which rely on large-sample theory: Bayesian estimates are valid for finite samples, and efficient numerical methods can be used to obtain estimates of instantaneous mortality. We conclude that many benefits can be obtained from the Bayesian approach to a single catch curve and to multiple years of data, such as closed-form variance estimates and the ability to both model and estimate the process variation of survival rates.
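For context, the classical (non-Bayesian) catch curve that this paper generalizes works as follows: under a stable age distribution with constant total instantaneous mortality Z, catch-at-age declines exponentially, so regressing log catch on age over the fully recruited ages recovers Z as the negative slope and annual survival as S = exp(-Z). A minimal sketch with synthetic data, not the paper's Bayesian model:

```python
import numpy as np

def catch_curve(ages, catches):
    """Classical catch curve: fit log(catch) = a - Z * age by least squares;
    the negative slope estimates total instantaneous mortality Z, and
    annual survival is S = exp(-Z)."""
    ages = np.asarray(ages, dtype=float)
    logc = np.log(np.asarray(catches, dtype=float))
    slope, intercept = np.polyfit(ages, logc, 1)
    Z = -slope
    return Z, np.exp(-Z)

ages = np.arange(3, 11)                       # fully recruited ages only
catches = 1000 * np.exp(-0.4 * ages)          # synthetic data with true Z = 0.4
Z, S = catch_curve(ages, catches)             # Z ≈ 0.4, S ≈ 0.67
```

The Bayesian formulation described above replaces this least-squares fit with a full probability model, which is what yields finite-sample inference and random year effects.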