Results 1 – 7 of 7
The practical implementation of Bayesian model selection
Institute of Mathematical Statistics, 2001
Abstract

Cited by 84 (3 self)
In principle, the Bayesian approach to model selection is straightforward. Prior probability distributions are used to describe the uncertainty surrounding all unknowns. After observing the data, the posterior distribution provides a coherent post-data summary of the remaining uncertainty which is relevant for model selection. However, the practical implementation of this approach often requires carefully tailored priors and novel posterior calculation methods. In this article, we illustrate some of the fundamental practical issues that arise for two different model selection problems: the variable selection problem for the linear model and the CART model selection problem.
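The core computation the abstract describes, turning prior model probabilities and marginal likelihoods into posterior model probabilities, can be sketched as follows. This is a minimal illustration in Python, not code from the paper; the function name and the toy marginal-likelihood values are ours.

```python
import numpy as np

def posterior_model_probs(log_marginals, prior_probs):
    """Posterior model probabilities from log marginal likelihoods and
    prior model probabilities (hypothetical helper, not from the paper)."""
    log_post = np.log(prior_probs) + np.asarray(log_marginals, dtype=float)
    log_post -= log_post.max()  # subtract the max for numerical stability
    w = np.exp(log_post)
    return w / w.sum()

# Two candidate models with equal prior probability; the second has the
# larger marginal likelihood and so dominates a posteriori.
probs = posterior_model_probs([-104.2, -101.5], [0.5, 0.5])
```

The stabilisation step matters in practice: log marginal likelihoods are often large negative numbers, and exponentiating them directly underflows.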
Diagnostic Measures for Model Criticism
Journal of the American Statistical Association, 1996
Abstract

Cited by 13 (1 self)
... In this article we present the general outlook and discuss general families of elaborations for use in practice; the exponential connection elaboration plays a key role. We then describe model elaborations for use in diagnosing departures from normality, goodness of fit in generalized linear models, variable selection in regression, and outlier detection. We illustrate our approach with two applications.
Bayesian Inference on Mixtures of Distributions
2008
Abstract

Cited by 6 (5 self)
This survey covers state-of-the-art Bayesian techniques for the estimation of mixtures. It complements the earlier Marin et al. (2005) by studying new types of distributions: the multinomial, latent class and t distributions. It also exhibits closed-form solutions for Bayesian inference in some discrete setups. Finally, it sheds new light on the computation of Bayes factors via the approximation of Chib (1995).
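For readers unfamiliar with the approximation mentioned, Chib's (1995) method rests on the basic marginal likelihood identity, which holds at any parameter value and is typically evaluated at a high-posterior-density point:

```latex
% Basic marginal likelihood identity underlying Chib (1995),
% evaluated at some fixed point \theta^*:
\log m(y) \;=\; \log f(y \mid \theta^*) \;+\; \log \pi(\theta^*) \;-\; \log \pi(\theta^* \mid y)
```

The likelihood and prior ordinates are available in closed form; the posterior ordinate is estimated from Gibbs output, which is precisely the step that becomes delicate for mixtures because of label switching.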
Measures of Surprise in Bayesian Analysis
Duke University, 1997
Abstract

Cited by 2 (2 self)
Measures of surprise refer to quantifications of the degree of incompatibility of data with some hypothesized model H0, without any reference to alternative models. Traditional measures of surprise have been p-values, which are however known to grossly overestimate the evidence against H0. Strict Bayesian analysis calls for an explicit specification of all possible alternatives to H0, so Bayesians have not made routine use of measures of surprise. In this report we critically review the proposals that have been made in this regard. We propose new modifications, stress the connections with robust Bayesian analysis, and discuss the choice of suitable predictive distributions which allow surprise measures to play their intended role in the presence of nuisance parameters. We recommend either the use of appropriate likelihood-ratio type measures or else the careful calibration of p-values so that they are closer to Bayesian answers. Key words and phrases: Bayes factors; Bayesian p-values; Bayesian robustness; Conditioning; Model checking; Predictive distributions.
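One well-known calibration of the kind this abstract alludes to is the Sellke–Bayarri–Berger lower bound on the Bayes factor implied by a p-value, B(p) ≥ −e·p·ln(p) for p < 1/e. The sketch below (the helper name is ours, not the report's) shows how modest the evidence behind a "significant" p-value really is.

```python
import math

def pvalue_to_bayes_bound(p):
    """Lower bound on the Bayes factor in favor of H0 implied by a
    p-value (Sellke-Bayarri-Berger calibration): B(p) >= -e * p * ln(p),
    valid for 0 < p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("calibration holds only for 0 < p < 1/e")
    return -math.e * p * math.log(p)

# p = 0.05 still leaves a Bayes-factor bound of roughly 0.41 for H0,
# far weaker evidence against H0 than the raw p-value suggests.
bound = pvalue_to_bayes_bound(0.05)
```

This illustrates the abstract's point that p-values "grossly overestimate the evidence against H0": a bound of about 0.41 corresponds to posterior odds nowhere near the 1-in-20 reading naively attached to p = 0.05.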
Bayesian Variable Selection in Qualitative Models by Kullback-Leibler projections
1998
Abstract

Cited by 1 (1 self)
The Bayesian variable selection method proposed in the paper is based on the evaluation of the Kullback-Leibler distance between the full (or encompassing) model and the submodels. The implementation of the method does not require a separate prior modeling on the submodels, since the corresponding parameters for the submodels are defined as the Kullback-Leibler projections of the full model parameters. The result of the selection procedure is the submodel with the smallest number of covariates which is at an acceptable distance from the full model. We introduce the notion of explanatory power of a model and scale the maximal acceptable distance in terms of the explanatory power of the full model. Moreover, an additivity property between embedded submodels shows that our selection procedure is equivalent to selecting the submodel with the smallest number of covariates which has a sufficient explanatory power. We illustrate the performance of this method on a breast cancer dataset, where they...
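The projection idea can be made concrete for the simplest case, a Gaussian linear model with known variance, where the Kullback-Leibler projection of the full model onto a submodel is just the least-squares fit of the full-model mean on the submodel's covariates. The sketch below is our own toy illustration (function name, data, and the known-variance simplification are all assumptions, not the paper's qualitative-model setting).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
beta_full = np.array([1.5, 0.8, 0.0])  # third covariate is irrelevant
sigma2 = 1.0
mu_full = X @ beta_full                # mean of the full model

def kl_to_submodel(X, mu_full, cols, sigma2):
    """Per-observation KL divergence from the full Gaussian model to its
    projection onto the submodel using only the covariates in `cols`.
    The projected parameters are the least-squares fit of mu_full on
    X[:, cols], so no separate prior on the submodel is needed."""
    Xs = X[:, cols]
    beta_s, *_ = np.linalg.lstsq(Xs, mu_full, rcond=None)
    resid = mu_full - Xs @ beta_s
    return float(resid @ resid) / (2.0 * sigma2 * len(mu_full))

# Dropping the irrelevant covariate costs essentially nothing;
# dropping an active one leaves a clearly positive distance.
d_keep = kl_to_submodel(X, mu_full, [0, 1], sigma2)
d_drop = kl_to_submodel(X, mu_full, [0], sigma2)
```

The selection rule in the abstract then amounts to choosing the smallest submodel whose distance stays below a threshold scaled by the full model's explanatory power.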
Evaluating Fit in Functional Data Analysis Using Model Embeddings
2001
Abstract
The author proposes a general method for evaluating the fit of a model for functional data. His approach consists of embedding the proposed model into a larger family of models, assuming the true process generating the data is within the larger family, and then computing a posterior distribution for the Kullback-Leibler distance between the true and the proposed models. The technique is illustrated on biomechanical data reported by Ramsay et al. (1995). It is developed in detail for hierarchical polynomial models such as those found in Lindley & Smith (1972), and is also generally applicable to longitudinal data analysis where polynomials are fit to many individuals.
The Whetstone and the Alum Block: Balanced Objective Bayesian Comparison of Nested Models for Discrete Data
Abstract
When two nested models are compared using a Bayes factor from an objective standpoint, two seemingly conflicting issues emerge at the time of choosing parameter priors under the two models. On the one hand, for moderate sample sizes, the evidence in favor of the smaller model can be inflated by diffuseness of the prior under the larger model. On the other hand, asymptotically, the evidence in favor of the smaller model typically accumulates at a slower rate. With reference to finitely discrete data models, we show that these two issues can be dealt with jointly by combining intrinsic priors and nonlocal priors in a new unified class of priors. We illustrate our ideas in a running Bernoulli example, then we apply them to test the equality of two proportions, and finally we deal with the more general case of logistic regression models.
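To make the "nonlocal prior" ingredient concrete: one standard member of that family is the first-order moment prior of Johnson and Rossell, which vanishes at the null value (whether this is exactly the class combined with intrinsic priors here is our assumption, not a claim about the paper):

```latex
% First-order moment (nonlocal) prior for testing H0: \theta = \theta_0,
% built from a base (local) prior \pi:
\pi_M(\theta) \;\propto\; (\theta - \theta_0)^2 \, \pi(\theta)
```

Because the prior assigns negligible mass near the null, evidence for the smaller model accumulates at a faster asymptotic rate than under a local prior, which is the imbalance the abstract's unified class is designed to correct.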