Results 1 – 8 of 8
Bayesian Model Averaging for Linear Regression Models
Journal of the American Statistical Association, 1997
Abstract

Cited by 311 (15 self)
We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making inferences about quantities of interest.
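The averaging described in this abstract can be illustrated in a few lines of Python. The sketch below enumerates every subset of predictors and weights each by the BIC approximation to its posterior model probability; this is a standard shortcut for BMA in linear regression, not this paper's exact prior formulation, and the function name is ours.

```python
# Sketch of Bayesian model averaging over all 2^p predictor subsets for a
# linear regression, using the BIC approximation to posterior model
# probabilities (a common shortcut, not this paper's exact prior setup).
from itertools import combinations
import numpy as np

def bma_weights(X, y):
    """Return every predictor subset with its approximate posterior probability."""
    n, p = X.shape
    models, bics = [], []
    for k in range(p + 1):
        for subset in combinations(range(p), k):
            # Design matrix: intercept plus the chosen predictors.
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = float(np.sum((y - Z @ beta) ** 2))
            # Gaussian-model BIC, up to an additive constant shared by all models.
            bic = n * np.log(rss / n) + (len(subset) + 1) * np.log(n)
            models.append(subset)
            bics.append(bic)
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))  # exp(-BIC/2), shifted for stability
    return models, w / w.sum()
```

A quantity of interest, e.g. the posterior inclusion probability of predictor j, is then the weight summed over all subsets containing j.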
Benchmark Priors for Bayesian Model Averaging
Forthcoming in the Journal of Econometrics, 2001
Abstract

Cited by 171 (5 self)
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, “diffuse” priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an “automatic” or “benchmark” prior structure that can be used in such cases. We focus on the Normal linear regression model with uncertainty in the choice of regressors. We propose a partly noninformative prior structure related to a Natural Conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (1995), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a “benchmark” prior specification in a linear regression context with model uncertainty.
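The g-prior family this abstract builds on admits a closed-form Bayes factor, which makes a small illustration easy. The sketch below uses the standard null-based Bayes factor for Zellner's g-prior; it is a generic illustration of the prior family, not the paper's specific benchmark recommendation, and the function name is ours.

```python
# Sketch of model comparison under Zellner's g-prior, the family the
# abstract's benchmark prior relates to. Uses the standard closed-form
# null-based Bayes factor:
#   BF(M : null) = (1+g)^((n-k-1)/2) * (1 + g*(1-R^2))^(-(n-1)/2)
import numpy as np

def g_prior_log_bf(X, y, subset, g):
    """Log Bayes factor of the model with predictors `subset` against the
    null (intercept-only) model, under a g-prior with hyperparameter g."""
    n = len(y)
    k = len(subset)
    if k == 0:
        return 0.0  # the null model compared with itself
    yc = y - y.mean()
    Z = X[:, list(subset)]
    Z = Z - Z.mean(axis=0)  # center, treating the intercept as common
    beta, *_ = np.linalg.lstsq(Z, yc, rcond=None)
    r2 = 1.0 - np.sum((yc - Z @ beta) ** 2) / (yc @ yc)
    return 0.5 * (n - k - 1) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))
```

Posterior model probabilities follow by normalizing exp(log BF) across models under a uniform model prior; the abstract's question is precisely how these answers move as the scalar hyperparameter (its g0j) varies.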
Bayesian indoor positioning systems
In Infocom, 2005
Abstract

Cited by 99 (15 self)
In this paper, we introduce a new approach to location estimation where, instead of locating a single client, we simultaneously locate a set of wireless clients. We present a Bayesian hierarchical model for indoor location estimation in wireless networks. We demonstrate that our model achieves accuracy that is similar to other published models and algorithms. By harnessing prior knowledge, our model eliminates the requirement for training data as compared with existing approaches, thereby introducing the notion of a fully adaptive zero profiling approach to location estimation. Index Terms — Experimentation with real networks/Testbed, Statistics, WLAN, localization,
Bayesian model averaging
Statistical Science, 1999
Abstract

Cited by 62 (1 self)
Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to overconfident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA have recently emerged. We discuss these methods and present a number of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of
A method for simultaneous variable selection and outlier identification in linear regression
Computational Statistics & Data Analysis, 1996
Bayesian Predictive Simultaneous Variable and Transformation Selection in the Linear Model
Abstract

Cited by 4 (0 self)
In this paper, we propose two variable and transformation selection procedures on the predictor variables in the linear model. The first procedure is a simultaneous variable and transformation selection procedure. For data sets with many predictors, a stepwise variable selection procedure is also presented. The procedures are based on Bayesian model selection criteria introduced by Ibrahim and Laud (1994) and Laud and Ibrahim (1995). Several examples are given to illustrate the methodology.
Power-Expected-Posterior Priors for Variable Selection in Gaussian Linear Models
2012
Abstract

Cited by 3 (1 self)
Summary: Imaginary training samples are often used in Bayesian statistics to develop prior distributions, with appealing interpretations, for use in model comparison. Expected-posterior priors are defined via imaginary training samples coming from a common underlying predictive distribution m∗, using an initial baseline prior distribution. These priors can have subjective and also default Bayesian implementations, based on different choices of m∗ and of the baseline prior. One of the main advantages of the expected-posterior priors is that impropriety of baseline priors causes no indeterminacy of Bayes factors; but at the same time they strongly depend on the selection and the size of the training sample. Here we combine ideas from the power-prior and the unit-information prior methodologies to greatly diminish the effect of training samples on a Bayesian variable-selection problem using the expected-posterior prior approach: we raise the likelihood involved in the expected-posterior prior distribution to a power that produces a prior information content equivalent to one data point. The result is that in practice our power-expected-posterior (PEP) methodology is sufficiently insensitive to the size n∗ of the training sample that one may take n∗ equal to the full-data sample size and dispense with training samples altogether; this promotes stability of the resulting Bayes factors, removes the arbitrariness arising from individual
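The central claim of this summary, that raising a training-sample likelihood to a suitable power leaves one observation's worth of information, can be checked numerically on a toy problem. The example below is our construction (a N(theta, 1) mean problem rather than the paper's regression setting): it raises the likelihood of n∗ imaginary points to the power 1/n∗ and verifies that the result is proportional to the likelihood of a single observation placed at the training-sample mean.

```python
# Toy numerical check of the PEP power idea (our construction, not the
# paper's regression setting): raising the likelihood of n* imaginary
# training points to the power 1/n* leaves the information content of a
# single observation.
import numpy as np

def power_likelihood(theta, ystar, delta):
    """Likelihood of i.i.d. N(theta, 1) data, raised to the power 1/delta."""
    ll = -0.5 * np.sum((ystar[:, None] - theta[None, :]) ** 2, axis=0)
    return np.exp(ll / delta)

rng = np.random.default_rng(1)
nstar = 50                              # size of the imaginary training sample
ystar = rng.normal(size=nstar)
theta = np.linspace(-3.0, 3.0, 201)     # grid of parameter values

powered = power_likelihood(theta, ystar, delta=nstar)    # delta = n*
one_obs = np.exp(-0.5 * (ystar.mean() - theta) ** 2)     # one point at the mean
ratio = powered / one_obs
# `ratio` is constant in theta: the powered likelihood is proportional to a
# one-observation likelihood, i.e. it carries one data point of information.
```

Taking delta = n∗ is exactly the choice the abstract describes; with it, the training-sample size drops out of the information content, which is why n∗ can be set to the full-data sample size.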