Results 1–9 of 9
A Simulation-Intensive Approach for Checking Hierarchical Models
TEST, 1998
Abstract

Cited by 8 (0 self)
Recent computational advances have made it feasible to fit hierarchical models in a wide range of serious applications. If one entertains a collection of such models for a given data set, the problems of model adequacy and model choice arise. We focus on the former. While model checking usually addresses the entire model specification, model failures can occur at each hierarchical stage. Such failures include outliers, mean structure errors, dispersion misspecification, and inappropriate exchangeabilities. We propose another approach which is entirely simulation based. It only requires the model specification and that, for a given data set, one be able to simulate draws from the posterior under the model. By replicating a posterior of interest using data obtained under the model we can "see" the extent of variability in such a posterior. Then, we can compare the posterior obtained under the observed data with this medley of posterior replicates to ascertain whether the former is in agr...
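The replicate-and-compare idea described above can be sketched in a few lines. The following is a hypothetical illustration with a conjugate normal model (all names, priors, and sample sizes are assumptions, not the authors' actual procedure): posteriors are replicated from data simulated under the model, and the observed-data posterior is then located within that medley.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model (an assumption for illustration):
# mu ~ N(0, tau2) a priori, and y_i | mu ~ N(mu, 1).
tau2, n = 4.0, 30

def posterior(y):
    """Posterior mean and variance of mu under the conjugate normal model."""
    var = 1.0 / (len(y) + 1.0 / tau2)
    return var * y.sum(), var

y_obs = rng.normal(3.0, 1.0, n)        # the observed data set
m_obs, _ = posterior(y_obs)

# Replicate the posterior: draw mu from the prior, simulate a data set
# under the model, and record the posterior mean that data set induces.
rep_means = np.array([
    posterior(rng.normal(rng.normal(0.0, np.sqrt(tau2)), 1.0, n))[0]
    for _ in range(1000)
])

# Locate the observed-data posterior within the medley of replicates.
p = np.mean(rep_means >= m_obs)
```

A value of `p` very close to 0 or 1 would indicate that the posterior obtained under the observed data is atypical of posteriors the model itself produces, i.e. a potential model failure.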
Nonparametric Bayesian Data Analysis
Abstract

Cited by 3 (0 self)
We review the current state of nonparametric Bayesian inference. The discussion follows a list of important statistical inference problems, including density estimation, regression, survival analysis, hierarchical models and model validation. For each inference problem we review relevant nonparametric Bayesian models and approaches including Dirichlet process (DP) models and variations, Pólya trees, wavelet based models, neural network models, spline regression, CART, dependent DP models, and model validation with DP and Pólya tree extensions of parametric models.
Parsimonious Estimation of Multiplicative Interaction in Analysis of Variance using Kullback-Leibler Information
Journal of Statistical Planning and Inference, 1999
Abstract

Cited by 2 (1 self)
Many standard methods for modeling interaction in two-way ANOVA require mn interaction parameters, where m and n are the number of rows and columns in the table. By viewing the interaction parameters as a matrix and performing a singular value decomposition, one arrives at the Additive Main Effects and Multiplicative Interaction (AMMI) model which is commonly used in agriculture. By using only those interaction components with the largest singular values, one can produce an estimate of interaction that requires far fewer than mn parameters while retaining most of the explanatory power of standard methods. The central inference problems of estimating the parameters and determining the number of interaction components have been difficult except in "ideal" situations (equal cell sizes, equal variance, etc.). The Bayesian methodology developed in this paper applies for unequal sample sizes and heteroscedastic data, and may be easily generalized to more complicated data structures...
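The SVD construction behind the AMMI model can be illustrated directly. A minimal numpy sketch on a simulated two-way table (the sizes, effect scales, and number of retained components are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 6, 5, 1   # rows, columns, interaction components kept (all assumed)

# Hypothetical two-way table of cell means:
# grand mean + row effects + column effects + interaction.
row, col = rng.normal(0, 1, m), rng.normal(0, 1, n)
interaction = 0.5 * np.outer(rng.normal(0, 1, m), rng.normal(0, 1, n))
Y = 10.0 + row[:, None] + col[None, :] + interaction

# Double-centre the table to isolate the interaction matrix, then take its SVD.
Z = Y - Y.mean(axis=0) - Y.mean(axis=1)[:, None] + Y.mean()
U, s, Vt = np.linalg.svd(Z)

# AMMI-k estimate: retain only the k largest singular values, so the
# interaction estimate needs roughly k(m + n) parameters rather than mn.
Z_hat = (U[:, :k] * s[:k]) @ Vt[:k]
```

By the Eckart–Young theorem this truncation is the best rank-k approximation of the interaction matrix in the least-squares sense, which is what makes the parsimonious estimate retain most of the explanatory power.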
Bayes Estimate and Inference for Entropy and Information Index of Fit
Abstract

Cited by 2 (1 self)
Kullback-Leibler information is widely used for developing indices of distributional fit. The most celebrated of such indices is Akaike’s AIC, which is derived as an estimate of the minimum Kullback-Leibler information between the unknown data-generating distribution and a parametric model. In the derivation of AIC, the entropy of the data-generating distribution is bypassed because it is free from the parameters. Consequently, the AIC type measures provide criteria for model comparison purposes only, and do not provide information diagnostic about the model fit. A nonparametric estimate of entropy of the data-generating distribution is needed for assessing the model fit. Several entropy estimates are available and have been used for frequentist inference about information fit indices. A few entropy-based fit indices have been suggested for Bayesian inference. This paper develops a class of entropy estimates and provides a procedure for Bayesian inference on the entropy and a fit index. For the continuous case, we define a quantized entropy that approximates and converges to the entropy integral. The quantized entropy includes some well known measures of sample entropy and the existing Bayes entropy estimates as its special cases. For inference about the fit, we use the candidate model as the expected distribution in the Dirichlet process prior and derive the posterior mean of the quantized entropy as the Bayes estimate. The maximum entropy characterization of the candidate model is then used to derive the prior and posterior distributions for the Kullback-Leibler information index of fit. The consistency of the proposed Bayes estimates for the entropy and for the information index is shown. As byproducts, the procedure also produces priors and posteriors for the model parameters and the moments.
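Among the "several entropy estimates" the abstract refers to, one classical choice is an m-spacing (Vasicek-type) estimator built from order statistics. The sketch below is a generic illustration of that family only, not the quantized-entropy estimator developed in the paper:

```python
import numpy as np

def spacing_entropy(x, m=None):
    """Vasicek-type entropy estimate from m-spacings of the order statistics."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    m = m or int(np.sqrt(n))
    # Clip indices so the m-spacing is defined at the sample boundaries.
    lo = np.clip(np.arange(n) - m, 0, None)
    hi = np.clip(np.arange(n) + m, None, n - 1)
    spacings = (x[hi] - x[lo]) * n / (hi - lo)
    return np.log(spacings).mean()

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 5000)
h_hat = spacing_entropy(x)
h_true = 0.5 * np.log(2 * np.pi * np.e)   # entropy of N(0, 1), about 1.419
```

For a correctly specified model, such a sample entropy estimate can be compared with the entropy implied by the fitted parametric model, which is the role the nonparametric entropy estimate plays in an information index of fit.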
Strategies for Inference Robustness in Complex Modelling: An Application to Longitudinal Performance Measures.
, 1999
Abstract

Cited by 1 (0 self)
Advances in computation mean it is now possible to fit a wide range of complex models, but selecting a model on which to base reported inferences is a difficult problem. Following an early suggestion of Box and Tiao, it seems reasonable to seek 'inference robustness' in reported models, so that alternative assumptions that are reasonably well supported would not lead to substantially different conclusions. We propose a four-stage modelling strategy in which we: iteratively assess and elaborate an initial model, measure the support for each of the resulting family of models, assess the influence of adopting alternative models on the conclusions of primary interest, and identify whether an approximate model can be reported. These stages are semi-formal, in that they are embedded in a decision-theoretic framework but require substantive input for any specific application. The ideas are illustrated on a dataset comprising the success rates of 46 in-vitro fertilisation clinics over three years. The analysis supports a model that assumes 43 of the 46 clinics have odds on success that are evolving at a constant proportional rate (i.e. linear on a logit scale), while three clinics are outliers in the sense of showing nonlinear trends. For the 43 'linear' clinics, the intercepts and gradients can be assumed to follow a bivariate normal distribution except for one outlying intercept: the odds on success are significantly increasing for four clinics and significantly decreasing for three. This model displays considerable inference robustness and, although its conclusions could be approximated by other less-supported models, these would not be any more parsimonious. Technical issues include fitting mixture models of alternative hierarchical longitudinal models, t...
Information Measures in Perspective
, 2010
Abstract

Cited by 1 (0 self)
Information-theoretic methodologies are increasingly being used in various disciplines. Frequently an information measure is adapted for a problem, yet the perspective of information as the unifying notion is overlooked. We set forth this perspective through presenting information-theoretic methodologies for a set of problems in probability and statistics. Our focal measures are Shannon entropy and Kullback-Leibler information. The background topics for these measures include notions of uncertainty and information, their axiomatic foundation, interpretations, properties, and generalizations. Topics with broad methodological applications include discrepancy between distributions, derivation of probability models, dependence between variables, and Bayesian analysis. More specific methodological topics include model selection, limiting distributions, optimal prior distribution and design of experiment, modeling duration variables, order statistics, data disclosure, and relative importance of predictors. Illustrations range from very basic to highly technical ones that draw attention to subtle points.
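For concreteness, the two focal measures can be written out for discrete distributions. A small numpy sketch (the example distributions are illustrative assumptions) computing Shannon entropy and Kullback-Leibler information:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log p_i, in nats; 0 log 0 taken as 0."""
    p = np.asarray(p, float)
    return -np.sum(p * np.log(p, where=p > 0, out=np.zeros_like(p)))

def kl(p, q):
    """Kullback-Leibler information K(p : q) = sum p_i log(p_i / q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q, where=p > 0, out=np.zeros_like(p)))

uniform = np.full(4, 0.25)
skewed = np.array([0.7, 0.1, 0.1, 0.1])
```

The uniform distribution maximizes entropy on a fixed support (here H = log 4), and K(p : q) is nonnegative and zero only when p = q, which is what makes it usable as a discrepancy between distributions.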
Bayesian Assessment of Goodness-of-Fit against Nonparametric Alternatives
, 2000
Abstract
The classical chi-square test of goodness-of-fit compares the hypothesis that data arise from some parametric family of distributions, against the nonparametric alternative that they arise from some other distribution. However, the chi-square test requires continuous data to be grouped into arbitrary categories. Furthermore, as the test is based upon an approximation, it can only be used if there is sufficient data. In practice, these requirements are often wasteful of information and overly restrictive. The authors explore the use of the fractional Bayes factor to obtain a Bayesian alternative to the chi-square test when no specific prior information is available. They consider the extent to which their methodology can handle small data sets and continuous data without arbitrary grouping.
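The grouping step the authors criticise is easy to see in code. A minimal sketch of the classical chi-square test against a hypothesised N(0, 1), where the equal-probability binning and sample size are arbitrary choices made only for this illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 500)   # continuous data, here actually from N(0, 1)

# Group the continuous data into k equal-probability bins under the
# hypothesised N(0, 1); the arbitrariness of k and the bin edges is
# exactly the criticism the abstract raises.
k = 10
edges = stats.norm.ppf(np.linspace(0, 1, k + 1))
observed, _ = np.histogram(x, bins=edges)
expected = np.full(k, len(x) / k)

chi2, pval = stats.chisquare(observed, expected)
```

The test compares binned counts with their expected values under the hypothesis; both the bin choice and the large-sample chi-square approximation are what the fractional Bayes factor approach seeks to avoid.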
Evaluating Fit in Functional Data Analysis Using Model Embeddings
, 2001
Abstract
The author proposes a general method for evaluating the fit of a model for functional data. His approach consists of embedding the proposed model into a larger family of models, assuming the true process generating the data is within the larger family, and then computing a posterior distribution for the Kullback-Leibler distance between the true and the proposed models. The technique is illustrated on biomechanical data reported by Ramsay et al. (1995). It is developed in detail for hierarchical polynomial models such as those found in Lindley & Smith (1972), and is also generally applicable to longitudinal data analysis where polynomials are fit to many individuals.
Nonparametric Bayesian Data Analysis
© Institute of Mathematical Statistics, 2004
Abstract
We review the current state of nonparametric Bayesian inference. The discussion follows a list of important statistical inference problems, including density estimation, regression, survival analysis, hierarchical models and model validation. For each inference problem we review relevant nonparametric Bayesian models and approaches including Dirichlet process (DP) models and variations, Pólya trees, wavelet based models, neural network models, spline regression, CART, dependent DP models and model validation with DP and Pólya tree extensions of parametric models. Key words and phrases: Dirichlet process, regression, density estimation, survival analysis, Pólya tree, random probability model (RPM).