A WEAKLY INFORMATIVE DEFAULT PRIOR DISTRIBUTION FOR LOGISTIC AND OTHER REGRESSION MODELS
Abstract

Cited by 18 (8 self)
We propose a new prior distribution for classical (nonhierarchical) logistic regression models, constructed by first scaling all nonbinary variables to have mean 0 and standard deviation 0.5, and then placing independent Student-t prior distributions on the coefficients. As a default choice, we recommend the Cauchy distribution with center 0 and scale 2.5, which in the simplest setting is a longer-tailed version of the distribution attained by assuming one-half additional success and one-half additional failure in a logistic regression. Cross-validation on a corpus of datasets shows the Cauchy class of prior distributions to outperform existing implementations of Gaussian and Laplace priors. We recommend this prior distribution as a default choice for routine applied use. It has the advantage of always giving answers, even when there is complete separation in logistic regression (a common problem, even when the sample size is large and the number of predictors is small), and also automatically applying more shrinkage to higher-order interactions. This can ...
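The rescaling-plus-prior recipe in this abstract can be sketched in a few lines. The function names below are my own, not the authors' code, and the treatment of binary columns (centering only) is an assumption for illustration:

```python
import numpy as np

def standardize_predictors(X, binary_cols=()):
    """Rescale predictors as in the abstract: nonbinary columns are
    shifted to mean 0 and scaled to standard deviation 0.5; binary
    columns are only centered (an illustrative assumption)."""
    X = np.asarray(X, dtype=float).copy()
    for j in range(X.shape[1]):
        if j in binary_cols:
            X[:, j] -= X[:, j].mean()
        else:
            X[:, j] = 0.5 * (X[:, j] - X[:, j].mean()) / X[:, j].std()
    return X

def cauchy_log_prior(beta, scale=2.5):
    """Independent Cauchy(0, scale) log prior on the coefficients,
    the recommended default with scale 2.5."""
    beta = np.asarray(beta, dtype=float)
    return np.sum(-np.log(np.pi * scale * (1.0 + (beta / scale) ** 2)))
```

With this scaling, a coefficient of 2.5 corresponds to a large effect on the rescaled predictor, which is why the Cauchy scale of 2.5 acts as a weakly informative default.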
Extending Conventional Priors for Testing General Hypotheses
 Biometrika
, 2007
Abstract

Cited by 7 (3 self)
In this paper, we consider that observations Y come from a general normal linear model and that it is desired to test a simplifying (null) hypothesis about the parameters. We approach this problem from an objective Bayesian, model selection perspective. Crucial ingredients for this approach are ‘proper objective priors’ to be used for deriving the Bayes factors. Jeffreys-Zellner-Siow priors have been shown to have good properties for testing null hypotheses defined by specific values of the parameters in full-rank linear models. We extend these priors to deal with general hypotheses in general linear models, not necessarily of full rank. The resulting priors, which we call ‘conventional priors’, are expressed as a generalization of the recently introduced ‘partially informative distributions’. The corresponding Bayes factors are fully automatic, easy to compute and very reasonable. The methodology is illustrated for two popular problems: the change-point problem and the equality of treatment effects problem. We compare the conventional priors derived for these problems with other objective Bayesian proposals such as the intrinsic priors. It is concluded that both priors behave similarly, although interesting subtle differences arise. Finally, we accommodate the conventional priors to deal with non-nested model selection as well as multiple model comparison.
Bayesian Selection of Log-Linear Models
 Canadian Journal of Statistics
, 1995
Abstract

Cited by 7 (2 self)
A general methodology is presented for finding suitable Poisson log-linear models with applications to multiway contingency tables. Mixtures of multivariate normal distributions are used to model prior opinion when a subset of the regression vector is believed to be nonzero. This prior distribution is studied for two- and three-way contingency tables, in which the regression coefficients are interpretable in terms of odds ratios in the table. Efficient and accurate schemes are proposed for calculating the posterior model probabilities. The methods are illustrated for a large number of two-way simulated tables and for two three-way tables. These methods appear to be useful in selecting the best log-linear model and in estimating parameters of interest that reflect uncertainty in the true model. Key words and phrases: Bayes factors, Laplace method, Gibbs sampling, model selection, odds ratios. AMS subject classifications: Primary 62H17, 62F15, 62J12.
Bayesian Computational Approaches to Model Selection
, 2000
Abstract

Cited by 4 (1 self)
... this paper was to provide a summary of the state-of-the-art theory on Bayesian model selection and the application of MCMC algorithms. It has been shown how applications of considerable complexity can be handled successfully within this framework. Several methods for dealing with the use of default, improper priors in the Bayesian model selection framework have been shown. Special care has been taken to pinpoint the subtleties of jumping from one parameter space to another and, in general, to show the construction of MCMC samplers in such scenarios. The focus in the paper was on the reversible jump MCMC algorithm, as this is the most widely used of all existing methods; it is easy to use, flexible and has nice properties. Many references have been cited, with the emphasis being given to articles with signal processing applications.
Some Bayesian perspectives on statistical modelling
, 1988
Abstract

Cited by 3 (2 self)
I would like to thank my supervisor, Professor A. F. M. Smith, for all his advice and encourage
Bayesian Methods for Cumulative, Sequential and Two-step Ordinal Data Regression Models
, 1997
Abstract

Cited by 3 (0 self)
This paper considers the fitting, criticism and comparison of three ordinal regression models: the cumulative, sequential and two-step models. Efficient algorithms based on Markov chain Monte Carlo methods are developed for each model. In the case of the cumulative model, a new Metropolis-Hastings procedure to sample the cut-points is proposed. This procedure relies on a simple transformation of the cut-points that leaves the transformed cut-points unordered. For comparing these models, we develop a coherent approach based on marginal likelihoods and Bayes factors. To help in the assignment of prior distributions to regression parameters and the cut-points, different methods for forming and representing prior beliefs are provided. One set of methods is based on the idea of a training sample and a prior imaginary sample. Another method is based on the direct assessment of distributions on the multinomial response, followed by a change of variable to a distribution on the parameters of t...
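A transformation that maps ordered cut-points to an unconstrained vector can be sketched as below. The abstract does not specify the paper's exact transformation, so this log-gap version is only one common choice of this kind, with hypothetical function names:

```python
import numpy as np

def cutpoints_to_unconstrained(c):
    """Map ordered cut-points c_1 < ... < c_K to an unconstrained
    vector: keep c_1, take logs of the successive gaps.
    (Illustrative; not necessarily the paper's transformation.)"""
    c = np.asarray(c, dtype=float)
    return np.concatenate([[c[0]], np.log(np.diff(c))])

def unconstrained_to_cutpoints(g):
    """Inverse map: any real vector g yields valid ordered
    cut-points, so a sampler can move freely in g-space."""
    g = np.asarray(g, dtype=float)
    return np.concatenate([[g[0]], g[0] + np.cumsum(np.exp(g[1:]))])
```

The point of such a reparameterization is that a Metropolis-Hastings proposal on the transformed vector never violates the ordering constraint, which simplifies sampling.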
Heterogeneity and model uncertainty in Bayesian regression models
, 1999
Abstract

Cited by 1 (0 self)
Data heterogeneity appears when the sample comes from at least two different populations. We analyze three types of situations. In the first and simplest case, the majority of the data come from a central model and a few isolated observations come from a contaminating distribution. The data from the contaminating distribution are called outliers, and they have been studied in depth in the statistical literature. In the second case we still have a central model, but the heterogeneous data may appear in clusters of outliers which mask each other. This is the multiple-outlier problem, which is much more difficult to handle and has only been analyzed and understood in the last few years. The few Bayesian contributions to this problem are presented. In the third case we do not have a central model; instead, different groups of data have been generated by different models. For the multivariate normal, this problem has been analyzed by mixture models under the name of cluster analysis, but a challenging area of research is to develop a general methodology for applying this multiple-model approach to other statistical problems. Heterogeneity in general implies an increase in the uncertainty of predictions, and we present in this paper a procedure to measure this effect.
Bayesian Analysis of Autoregressive Time Series with Change Points
, 2000
Abstract

Cited by 1 (0 self)
The paper deals with the identification of a stationary autoregressive model for a time series and the simultaneous detection of a change in its mean. We adopt the Bayesian approach with weak prior information about the parameters of the models under comparison and an exact form of the likelihood function. When necessary, we resort to the fractional Bayes factor to choose between models, and to importance sampling to solve computational issues. 1 Introduction The class of autoregressive models is a rather general set of models largely used to represent stationary time series. However, time series often present change points in their dynamic structure, which may have a serious impact on the analysis and lead to misleading conclusions. A change point, which is generally the effect of an external event on the phenomenon of interest, may be represented by a change in the structure of the model or simply by a change in the value of some of the parameters.
A weakly informative default prior distribution for logistic and other regression models
, 2006
Abstract

Cited by 1 (1 self)
Temperature Wind
Abstract
 Add to MetaCart
) model (Lewis and Stevens, 1991; Lewis et al., 1994). The modelling is done by letting the predictor variables for the τth value in the time series {y_τ} be given by y_{τ-1} (= x_{τ,1}), y_{τ-2} (= x_{τ,2}), ..., y_{τ-p} (= x_{τ,p}). Note that if we combined these predictors to form a linear additive function we would just be modelling the time series as a usual AR(p) process. However, the ASTAR method involves modelling these lagged predictor variables using a MARS model. Thus the predictor variables can have both threshold terms, because of the form of the truncated linear spline basis functions, and interactions
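The lagged-predictor construction described in this excerpt can be sketched as follows (a hypothetical helper, not code from the source): row t of the matrix holds y_{t-1}, ..., y_{t-p}, so a linear fit on these columns is an ordinary AR(p) model, while the ASTAR approach would instead apply MARS basis functions to the same columns:

```python
import numpy as np

def lagged_design(y, p):
    """Build the lagged-predictor matrix: row t contains
    y_{t-1}, ..., y_{t-p}, paired with target y_t."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([y[p - k : len(y) - k] for k in range(1, p + 1)])
    return X, y[p:]
```

For a series of length n this yields n - p usable rows, since the first p values have incomplete lag histories.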