Results 1–10 of 14
Diagnostic Measures for Model Criticism
Journal of the American Statistical Association, 1996
Abstract

Cited by 13 (1 self)
... In this article we present the general outlook and discuss general families of elaborations for use in practice; the exponential connection elaboration plays a key role. We then describe model elaborations for use in diagnosing departures from normality, goodness of fit in generalized linear models, variable selection in regression, and outlier detection. We illustrate our approach with two applications.
Strategies for Inference Robustness in Complex Modelling: An Application to Longitudinal Performance Measures.
, 1999
Abstract

Cited by 1 (0 self)
Advances in computation mean it is now possible to fit a wide range of complex models, but selecting a model on which to base reported inferences is a difficult problem. Following an early suggestion of Box and Tiao, it seems reasonable to seek `inference robustness' in reported models, so that alternative assumptions that are reasonably well supported would not lead to substantially different conclusions. We propose a four-stage modelling strategy in which we: iteratively assess and elaborate an initial model, measure the support for each of the resulting family of models, assess the influence of adopting alternative models on the conclusions of primary interest, and identify whether an approximate model can be reported. These stages are semi-formal, in that they are embedded in a decision-theoretic framework but require substantive input for any specific application. The ideas are illustrated on a dataset comprising the success rates of 46 in-vitro fertilisation clinics over three years. The analysis supports a model that assumes 43 of the 46 clinics have odds on success that are evolving at a constant proportional rate (i.e. linear on a logit scale), while three clinics are outliers in the sense of showing non-linear trends. For the 43 `linear' clinics, the intercepts and gradients can be assumed to follow a bivariate normal distribution except for one outlying intercept: the odds on success are significantly increasing for four clinics and significantly decreasing for three. This model displays considerable inference robustness and, although its conclusions could be approximated by other less-supported models, these would not be any more parsimonious. Technical issues include fitting mixture models of alternative hierarchical longitudinal models, t...
Bayesian Hypothesis Testing in Latent Variable Models
, 2010
Abstract

Cited by 1 (1 self)
Hypothesis testing using Bayes factors (BFs) is known to suffer from several problems in the context of latent variable models. The first problem is computational. Another problem is that BFs are not well defined under improper priors. In this paper, a new Bayesian method, based on decision theory and the EM algorithm, is introduced to test a point hypothesis in latent variable models. The new statistic is a by-product of the Bayesian MCMC output and, hence, easy to compute. It is shown that the new statistic is appropriately defined under improper priors because the method employs a continuous loss function. The finite sample properties are examined using simulated data. The method is also illustrated in the context of a one-factor asset pricing model and a stochastic volatility model with jumps using real data.
Information Measures in Perspective
, 2010
Abstract

Cited by 1 (0 self)
Information-theoretic methodologies are increasingly being used in various disciplines. Frequently an information measure is adapted for a problem, yet the perspective of information as the unifying notion is overlooked. We set forth this perspective through presenting information-theoretic methodologies for a set of problems in probability and statistics. Our focal measures are Shannon entropy and Kullback-Leibler information. The background topics for these measures include notions of uncertainty and information, their axiomatic foundation, interpretations, properties, and generalizations. Topics with broad methodological applications include discrepancy between distributions, derivation of probability models, dependence between variables, and Bayesian analysis. More specific methodological topics include model selection, limiting distributions, optimal prior distribution and design of experiment, modeling duration variables, order statistics, data disclosure, and relative importance of predictors. Illustrations range from very basic to highly technical ones that draw attention to subtle points.
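The two focal measures named in this abstract, Shannon entropy and Kullback-Leibler information, can be sketched for discrete distributions as follows. This is a minimal illustration, not code from the paper; the function names and toy distributions are ours.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log p_i (natural log)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

def kl_divergence(p, q):
    """Kullback-Leibler information K(p : q) = sum_i p_i log(p_i / q_i)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
print(shannon_entropy(uniform))            # log(4) ≈ 1.3863, the maximum for 4 outcomes
print(kl_divergence(skewed, uniform) > 0)  # KL is non-negative, zero only when p = q
```

Entropy is maximized by the uniform distribution, and the KL divergence measures the discrepancy of the skewed distribution from it; both properties underlie the "discrepancy between distributions" applications the abstract mentions.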
Evaluating Fit in Functional Data Analysis Using Model Embeddings
, 2001
Abstract
The author proposes a general method for evaluating the fit of a model for functional data. His approach consists of embedding the proposed model into a larger family of models, assuming the true process generating the data is within the larger family, and then computing a posterior distribution for the Kullback-Leibler distance between the true and the proposed models. The technique is illustrated on biomechanical data reported by Ramsay et al. (1995). It is developed in detail for hierarchical polynomial models such as those found in Lindley & Smith (1972), and is also generally applicable to longitudinal data analysis where polynomials are fit to many individuals.
Measuring Prior Sensitivity and Prior Informativeness
, 2010
Abstract
The paper derives measures of prior sensitivity and prior informativeness for posterior results in large Bayesian models that account for the high dimensional interaction between prior and likelihood information. The basis for both measures is the derivative matrix of the posterior mean with respect to the prior mean, which is easily obtained from Markov Chain Monte Carlo output. An application to Smets and Wouters' (2007) dynamic stochastic general equilibrium model shows that for many structural parameters, the prior is very informative, and posterior means are quite sensitive to changes in prior means. In contrast, the prior plays a much less important role for key impulse responses and variance decompositions.
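The idea of measuring prior sensitivity as the derivative of the posterior mean with respect to the prior mean can be sketched in a toy conjugate normal-normal model, where the derivative is available in closed form. The model, data, and function names here are ours, not from the paper, which targets large models via MCMC output.

```python
import numpy as np

def posterior_mean(m0, v0, y, sigma2):
    """Conjugate normal-normal model: prior N(m0, v0), data y with known
    variance sigma2. The posterior mean is a precision-weighted average."""
    prec_prior, prec_lik = 1.0 / v0, len(y) / sigma2
    return (prec_prior * m0 + prec_lik * np.mean(y)) / (prec_prior + prec_lik)

rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, 20)
m0, v0, sigma2 = 0.0, 0.5, 1.0

# Sensitivity: derivative of the posterior mean w.r.t. the prior mean,
# here by finite differences (in a large model this derivative matrix
# would be estimated from MCMC output instead).
eps = 1e-6
sens = (posterior_mean(m0 + eps, v0, y, sigma2)
        - posterior_mean(m0, v0, y, sigma2)) / eps

# Analytic check: the derivative equals the prior's precision weight,
# (1/v0) / (1/v0 + n/sigma2) = 2/22 ≈ 0.0909 here.
w = (1 / v0) / (1 / v0 + len(y) / sigma2)
print(sens, w)
```

A derivative near 1 means the posterior mean tracks the prior mean (an informative prior); a derivative near 0 means the likelihood dominates, which is the distinction the abstract draws between structural parameters and impulse responses.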
Computer Based Statistical Treatment in Models with Incidental Parameters Inspired by Car Crash Data
Abstract
in recent years. We study computer-intensive methods that can be used in complex situations where it is not possible to express the likelihood estimates or the posterior analytically. The work is inspired by a set of car crash data from real traffic. We formulate and develop a model for car crash data that aims to estimate and compare the relative collision safety among different car models. This model works sufficiently well, although complications arise due to a growing vector of incidental parameters. The bootstrap is shown to be a useful tool for studying uncertainties of the estimates of the structural parameters. This model is further extended to include driver characteristics. In a Poisson model with similar, but simpler structure, estimates of the structural parameter in the presence of incidental parameters are studied. The profile likelihood, bootstrap and the delta method are compared for deterministic and random incidental parameters. The same asymptotic properties, up to first order, are seen for deterministic as well as random
Relative Distributional Methods
, 1997
Abstract
Relative distribution methods are a non-parametric statistical framework for analyzing data in a fully distributional context. The methods combine the graphical tools of exploratory data analysis with a framework for statistical decomposition and inference. The relative distribution is similar to a density ratio, and is based on the direct comparison of one distribution to another. It is technically defined as the random variable obtained by transforming a variable from a comparison group by the cumulative distribution function (CDF) of that variable for a reference group. This transformation produces a set of observations, the relative data, that represent the rank of the original comparison value in terms of the reference group's CDF. The relative data preserve the information needed to compare the two original distributions. The density and CDF of the relative data can therefore be used to fully represent and analyze distributional differences. Analysis can move beyond comparisons of means and variances to fully tap the information inherent in distributions. The analytic framework is general and flexible, as the relative density is decomposable into location, shape and covariate effects.
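The CDF transformation defining the relative data can be sketched directly from the abstract's definition: each comparison observation is passed through the reference group's empirical CDF. This is a minimal illustration with simulated groups; the function name and data are ours.

```python
import numpy as np

def relative_data(comparison, reference):
    """Transform comparison observations by the reference group's
    empirical CDF: r_i = F_ref(y_i). Each r_i is the rank of the
    comparison value within the reference distribution, in [0, 1]."""
    reference = np.sort(np.asarray(reference, dtype=float))
    comparison = np.asarray(comparison, dtype=float)
    # ECDF of the reference group evaluated at each comparison value
    return np.searchsorted(reference, comparison, side="right") / reference.size

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 10_000)
comp = rng.normal(0.5, 1.0, 10_000)  # comparison group shifted upward
r = relative_data(comp, ref)

# If the two distributions were identical, r would be approximately
# Uniform(0, 1) with mean 0.5; the location shift pushes the relative
# ranks above 0.5.
print(r.mean() > 0.5)
```

Departures of the relative data from a uniform distribution are what carry the distributional differences the abstract describes; a histogram of `r` is an estimate of the relative density.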
Understanding predictive information criteria for Bayesian models
, 2013
Abstract
We review the Akaike, deviance, and Watanabe-Akaike information criteria from a Bayesian perspective, where the goal is to estimate expected out-of-sample prediction error using a bias-corrected adjustment of within-sample error. We focus on the choices involved in setting up these measures, and we compare them in three simple examples, one theoretical and two applied. The contribution of this paper is to put all these information criteria into a Bayesian predictive context and to better understand, through small examples, how these methods can apply in practice.
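The Watanabe-Akaike criterion reviewed here can be sketched from a matrix of posterior log-likelihood draws, using the standard decomposition into the log pointwise predictive density (lppd) and a variance-based effective-parameter penalty. The toy data and helper name below are ours, and the example is a hedged sketch rather than the paper's own code.

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (S draws x n observations) matrix of pointwise
    log-likelihoods:
        lppd   = sum_i log( mean_s p(y_i | theta_s) )
        p_waic = sum_i var_s( log p(y_i | theta_s) )
        WAIC   = -2 * (lppd - p_waic)
    """
    log_lik = np.asarray(log_lik, dtype=float)
    # log of the posterior-mean likelihood per observation, computed stably
    lppd = np.sum(np.logaddexp.reduce(log_lik, axis=0)
                  - np.log(log_lik.shape[0]))
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

# Toy example: normal model with known unit variance, posterior draws of
# the mean parameter approximated by its conjugate posterior.
rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, 50)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), 2000)
log_lik = (-0.5 * np.log(2 * np.pi)
           - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2)
print(waic(log_lik))
```

The penalty term `p_waic` plays the role of the effective number of parameters; for this one-parameter model it comes out close to 1, mirroring the bias-corrected adjustment of within-sample error that the abstract describes.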
License: GPL
, 2013
Abstract
Title: A package for estimating age-specific survival from incomplete capture-recapture/recovery data