Results 1–6 of 6
Accounting for Model Uncertainty in Survival Analysis Improves Predictive Performance
 In Bayesian Statistics 5
, 1995
Abstract

Cited by 39 (12 self)
Survival analysis is concerned with finding models to predict the survival of patients or to assess the efficacy of a clinical treatment. A key part of the model-building process is the selection of the predictor variables. It is standard to use a stepwise procedure guided by a series of significance tests to select a single model, and then to make inference conditionally on the selected model. However, this ignores model uncertainty, which can be substantial. We review the standard Bayesian model averaging solution to this problem and extend it to survival analysis, introducing partial Bayes factors to do so for the Cox proportional hazards model. In two examples, taking account of model uncertainty enhances predictive performance, to an extent that could be clinically useful. 1 Introduction: From 1974 to 1984 the Mayo Clinic conducted a double-blinded randomized clinical trial involving 312 patients to compare the drug DPCA with a placebo in the treatment of primary biliary cirrhosis...
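The core Bayesian model averaging computation this abstract describes can be sketched numerically: posterior model probabilities are approximated from per-model BIC scores (proportional to exp(-BIC/2) under a uniform model prior), and each model's prediction is averaged with those weights. The BIC values and survival probabilities below are hypothetical illustrations, not figures from the paper:

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values:
    p(M_k | D) is proportional to exp(-BIC_k / 2), assuming a
    uniform prior over the candidate models."""
    best = min(bics)  # subtract the minimum for numerical stability
    raw = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical BIC values for three candidate Cox models
bics = [210.3, 212.1, 215.8]
w = bma_weights(bics)

# Hypothetical 5-year survival probabilities predicted by each model
preds = [0.62, 0.55, 0.48]

# The BMA prediction is the weight-averaged prediction over models
bma_pred = sum(wk * pk for wk, pk in zip(w, preds))
print([round(x, 3) for x in w], round(bma_pred, 3))
```

The averaged prediction always lies between the most extreme per-model predictions, with models of lower BIC contributing more weight.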
Variable selection and Bayesian model averaging in case-control studies
, 1998
Abstract

Cited by 19 (7 self)
Covariate and confounder selection in case-control studies is most commonly carried out using either a two-step method or a stepwise variable selection method in logistic regression. Inference is then carried out conditionally on the selected model, but this ignores the model uncertainty implicit in the variable selection process, and so underestimates uncertainty about relative risks. We report on a simulation study designed to be similar to actual case-control studies. This shows that p-values computed after variable selection can greatly overstate the strength of conclusions. For example, for our simulated case-control studies with 1,000 subjects, of variables declared to be "significant" with p-values between .01 and .05, only 49% actually were risk factors when stepwise variable selection was used. We propose Bayesian model averaging as a formal way of taking account of model uncertainty in case-control studies. This yields an easily interpreted summary, the posterior probability that a variable is a risk factor, and our simulation study indicates this to be reasonably well calibrated in the situations simulated. The methods are applied and compared...
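The summary quantity this abstract highlights, the posterior probability that a variable is a risk factor, is the total posterior mass of all models that include that variable. A minimal sketch using a BIC approximation over a hypothetical three-variable model space (the variable names and BIC values are made up for illustration; in practice each BIC comes from fitting the corresponding logistic regression to the data):

```python
import math

# Hypothetical BIC for each subset of candidate variables {A, B, C}
variables = ["A", "B", "C"]
bic = {
    (): 130.0,
    ("A",): 121.5, ("B",): 128.0, ("C",): 129.5,
    ("A", "B"): 120.9, ("A", "C"): 123.0, ("B", "C"): 129.8,
    ("A", "B", "C"): 123.5,
}

# Approximate posterior model probabilities: exp(-BIC/2), normalized
best = min(bic.values())
post = {m: math.exp(-(b - best) / 2.0) for m, b in bic.items()}
z = sum(post.values())
post = {m: p / z for m, p in post.items()}

# Posterior probability that each variable is a risk factor:
# the sum of posterior probabilities of all models containing it
p_risk = {v: sum(p for m, p in post.items() if v in m) for v in variables}
print({v: round(p, 3) for v, p in p_risk.items()})
```

With these made-up scores, variable A appears in all the low-BIC models, so its posterior inclusion probability is high, while C's is low: the probabilities summarize model uncertainty directly rather than conditioning on one selected model.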
Bayesian structure learning using dynamic programming and MCMC
 In UAI, 2007b
Abstract

Cited by 13 (1 self)
We show how to significantly speed up MCMC sampling of DAG structures by using a powerful non-local proposal based on Koivisto’s dynamic programming (DP) algorithm (11; 10), which computes the exact marginal posterior edge probabilities by analytically summing over orders. Furthermore, we show how sampling in DAG space can avoid subtle biases that are introduced by approaches that work only with orders, such as Koivisto’s DP algorithm and MCMC order samplers (6; 5).
Long-Run Performance of Bayesian Model Averaging
 Journal of the American Statistical Association
, 2003
Abstract

Cited by 10 (2 self)
Hjort and Claeskens (HC) argue that statistical inference conditional on a single selected model underestimates uncertainty, and that model averaging is the way to remedy this; we strongly agree. They point out that Bayesian model averaging (BMA) has been the dominant approach to this, but argue that its performance has been inadequately studied, and propose an alternative, Frequentist Model Averaging (FMA). We point out, however, that there is a substantial literature on the performance of BMA, consisting of three main threads: general theoretical results, simulation studies, and evaluation of out-of-sample performance. The theoretical results are scattered, and we summarize them. The results have been quite consistent: BMA has tended to outperform competing methods for model selection and taking account of model uncertainty. The theoretical results depend on the assumption that the "practical distribution" over which the performance of methods is assessed is the same as the prior distribution used, and we investigate sensitivity of results to this assumption in a simple normal example; they turn out not to be unduly sensitive.
Bayes Factors and BIC: Comment on Weakliem
, 1998
Abstract

Cited by 3 (0 self)
Weakliem agrees that Bayes factors are useful for model selection and hypothesis testing. He reminds us that the simple and convenient BIC approximation corresponds most closely to one particular prior on the parameter space, the unit information prior, and points out that researchers may have different prior information or opinions. Clearly a prior that represents the available information should be used, although the unit information prior often seems reasonable in the absence of strong prior information. It seems that, among the Bayes factors likely to be used in practice, BIC is conservative in the sense of tending to provide less evidence for additional parameters or "effects". Thus if a Bayes factor based on additional prior information favors an effect, but BIC does not, the prior information is playing a crucial role and this should be made clear when the research is reported. BIC may well have a role as a baseline reference analysis to be provided in routine reporting of research results, perhaps along with Bayes factors based on other priors. In Weakliem's 2 × 2 table examples, BIC and Bayes factors based on Weakliem's preferred priors lead to similar substantive conclusions, but both differ from those based on P values. When there is additional prior information, the technology now exists to express it as
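The BIC-to-Bayes-factor relationship this comment builds on can be illustrated directly: under the unit information prior, the Bayes factor for the alternative over the null is approximated by exp((BIC_null − BIC_alt) / 2). The fitted BIC values below are hypothetical:

```python
import math

def approx_bayes_factor(bic_null, bic_alt):
    """BIC approximation to the Bayes factor for the alternative
    model over the null: BF ~ exp((BIC_null - BIC_alt) / 2).
    A BIC drop of 4 when adding an "effect" thus corresponds to
    a Bayes factor of about exp(2)."""
    return math.exp((bic_null - bic_alt) / 2.0)

# Hypothetical fits where the alternative adds one parameter ("effect")
bf = approx_bayes_factor(bic_null=100.0, bic_alt=96.0)
print(round(bf, 2))
```

Because this approximation tends to favor additional parameters less than Bayes factors built from informative priors, a result that is supported only under such a prior (and not by BIC) signals that the prior information is doing real work, which is the reporting point the comment makes.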
Classification using Bayesian Neural Nets
 IEEE International Conference on Neural Networks
, 1996
Abstract
Recently, Bayesian methods have been proposed for neural networks to solve regression and classification problems. These methods claim to overcome some difficulties encountered in the standard approach, such as overfitting. However, implementing the full Bayesian approach to neural networks, as suggested in the literature, for classification problems is not easy. In fact, we are not aware of applications of the full approach to real-world classification problems.