Results 1–6 of 6
“Objective Priors for Model Selection in One-Way Random Effects Models.” Submitted to The Canadian Journal of Statistics, 2005
Abstract

Cited by 2 (1 self)
It is broadly accepted that the Bayes factor is a key tool in model selection. Nevertheless, which priors should be used to develop objective (or default) Bayes factors remains an important, difficult and still open question. We consider this problem in the context of the one-way random effects model. Arguments based on concepts such as orthogonality, predictive matching, and invariance are used to justify a specific form of the priors, in which the (proper) prior for the new parameter (in Jeffreys’ terminology) has to be determined. Two different proposals for this proper prior have been derived: the intrinsic priors and the divergence-based priors, a recently proposed methodology. It will be seen that the divergence-based priors produce consistent Bayes factors. The methods are illustrated on examples and compared with other proposals. Finally, the divergence-based priors and the associated Bayes factor are derived for the unbalanced case.
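The Bayes factor the abstract refers to can be illustrated with a small numerical sketch. Everything below is a toy construction, not the paper's method: the overall mean and error variance are treated as known, the data are invented, and a half-Cauchy prior on the between-group variance ratio `tau` stands in for the intrinsic or divergence-based priors the abstract discusses.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Toy balanced one-way data: k = 8 group means, n = 5 observations per
# group, error variance fixed at 1, so each group mean has variance
# tau + 1/n under M1 (random effects) and 1/n under M0 (no random effect).
n = 5
ybar = np.array([1.2, -0.8, 0.5, 1.9, -1.4, 0.3, -2.1, 0.9])

def log_m1(tau):
    # marginal log-likelihood of the group means under M1, given tau
    return stats.norm.logpdf(ybar, 0.0, np.sqrt(tau + 1 / n)).sum()

# Proper prior on tau -- a half-Cauchy stand-in, NOT the paper's DB prior.
prior = lambda tau: (2 / np.pi) / (1 + tau ** 2)

num, _ = quad(lambda t: np.exp(log_m1(t)) * prior(t), 0, np.inf)
den = np.exp(stats.norm.logpdf(ybar, 0.0, np.sqrt(1 / n)).sum())
print(f"BF10 = {num / den:.3f}")  # > 1 favours the random-effects model
```

Because the toy group means are far more dispersed than the 1/n sampling variance allows, the integrated likelihood under M1 dominates and the Bayes factor comes out well above 1.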
Generalization of Jeffreys’ Divergence-Based Priors for Bayesian Hypothesis Testing
, 2008
Abstract

Cited by 1 (0 self)
In this paper we introduce objective proper prior distributions for hypothesis testing and model selection based on measures of divergence between the competing models; we call them divergence-based (DB) priors. DB priors have simple forms and desirable properties, such as information (finite-sample) consistency; often they are similar to other existing proposals, such as the intrinsic priors; moreover, in normal linear model scenarios they exactly reproduce the Jeffreys-Zellner-Siow priors. Most importantly, in challenging scenarios such as irregular models and mixture models, the DB priors are well defined and very reasonable, while alternative proposals are not. We derive approximations to the DB priors as well as MCMC and asymptotic expressions for the associated Bayes factors.
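The construction the abstract describes, a proper prior built from a divergence between the competing models, can be sketched for a simple normal-mean test. The kernel form `(1 + D(theta))**(-q)` and the choice `q = 1` are assumptions made here for illustration, not details taken from the paper; the point is only that damping by the divergence yields a proper, heavy-tailed prior.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# KL divergence between N(theta, 1) and N(0, 1) is theta^2 / 2.
D = lambda th: th ** 2 / 2

# DB-style prior kernel: proportional to (1 + D(theta))^(-q); q = 1 assumed.
q = 1.0
kernel = lambda th: (1 + D(th)) ** (-q)
Z, _ = quad(kernel, -np.inf, np.inf)   # finite => the prior is proper
prior = lambda th: kernel(th) / Z      # here this is a Cauchy(0, sqrt(2)) density

# Bayes factor for H1: theta unrestricted vs H0: theta = 0, one observation x.
x = 1.5
num, _ = quad(lambda th: stats.norm.pdf(x, th, 1) * prior(th), -np.inf, np.inf)
bf10 = num / stats.norm.pdf(x, 0, 1)
print(f"normalising constant Z = {Z:.4f}, BF10 = {bf10:.3f}")
```

Note that with `q = 1` the kernel integrates to pi * sqrt(2), i.e. the prior is exactly a Cauchy with scale sqrt(2), which is consistent with the abstract's remark that in normal settings such priors recover heavy-tailed Jeffreys-Zellner-Siow-type forms.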
Bayesian Analysis of Threshold Autoregressive Models
, 2003
Abstract
First of all, I would like to thank my major professor, Halima Bensmail, for her guidance and support throughout this research. Also, I would like to thank the members of my committee, Hamparsum Bozdogan, George Philippatos, and John Barkoulas, for their patience and support. I am also grateful to the faculty and colleagues of the statistics department, who have given me insights into the new world of statistics. Taking this opportunity, I would also like to thank my parents, family, and friends. Without their suggestions and encouragement, this work would not have been possible. Threshold autoregression is a powerful statistical tool for modeling structural nonlinear relationships. This study presents a Bayesian modeling procedure for threshold autoregressions. To this end, the analytical framework of Bayesian analysis for a univariate SETAR model and a threshold VAR was developed. For the estimation of parameters, a Markov chain Monte Carlo (MCMC) simulation and importance/rejection sampling are used.
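A minimal sketch of the threshold-autoregression setup the abstract refers to: simulate a two-regime SETAR model and recover the threshold by profile least squares over a grid of candidates. This is a frequentist stand-in for the likelihood evaluation inside a Bayesian/MCMC treatment, and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a two-regime SETAR(2; 1, 1): the AR coefficient switches
# when the lagged value crosses the threshold r_true = 0.
T, r_true = 500, 0.0
y = np.zeros(T)
for t in range(1, T):
    phi = 0.7 if y[t - 1] <= r_true else -0.5
    y[t] = phi * y[t - 1] + rng.normal(0.0, 1.0)

def sse(r):
    # Conditional least squares: fit a separate AR(1) slope in each regime
    # defined by the candidate threshold r, and sum the squared residuals.
    out = 0.0
    for mask in (y[:-1] <= r, y[:-1] > r):
        z, zlag = y[1:][mask], y[:-1][mask]
        if len(zlag) > 1:
            phi_hat = (zlag @ z) / (zlag @ zlag)
            out += ((z - phi_hat * zlag) ** 2).sum()
    return out

# Grid of candidate thresholds over the middle quantiles of the series;
# a Bayesian treatment would instead put a prior on r and sample it by MCMC.
grid = np.quantile(y, np.linspace(0.15, 0.85, 71))
r_hat = grid[np.argmin([sse(r) for r in grid])]
print(f"estimated threshold: {r_hat:.2f}")
```

With clearly separated regimes (slopes 0.7 vs -0.5) and 500 observations, the profile-least-squares estimate lands close to the true threshold of 0.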
Divergence-Based Priors for Bayesian Hypothesis Testing
, 2006
Abstract
Perhaps the main difficulty for objective Bayesian hypothesis testing (and model selection in general) is that the usual objective improper priors cannot be used for parameters that do not occur in all of the models. In this paper we introduce (objective) proper prior distributions for hypothesis testing and model selection based on measures of divergence between the competing models; we call them divergence-based (DB) priors. DB priors have simple forms and desirable properties, such as information (finite-sample) consistency; often they are similar to other existing proposals, such as the intrinsic priors; moreover, in normal linear model scenarios they exactly reproduce the Jeffreys-Zellner-Siow priors. Most importantly, in challenging scenarios such as irregular models and mixture models, the DB priors are well defined and very reasonable, while alternative proposals are not. We derive approximations to the DB priors as well as MCMC and asymptotic expressions for the associated Bayes factors, which also reveal interesting connections with other proposals (such as the unit information priors).