Results 1–7 of 7
Accounting for Model Uncertainty in Survival Analysis Improves Predictive Performance
 In Bayesian Statistics 5
, 1995
Abstract

Cited by 39 (12 self)
Survival analysis is concerned with finding models to predict the survival of patients or to assess the efficacy of a clinical treatment. A key part of the model-building process is the selection of the predictor variables. It is standard to use a stepwise procedure guided by a series of significance tests to select a single model, and then to make inference conditionally on the selected model. However, this ignores model uncertainty, which can be substantial. We review the standard Bayesian model averaging solution to this problem and extend it to survival analysis, introducing partial Bayes factors to do so for the Cox proportional hazards model. In two examples, taking account of model uncertainty enhances predictive performance, to an extent that could be clinically useful.

1 Introduction
From 1974 to 1984 the Mayo Clinic conducted a double-blinded randomized clinical trial involving 312 patients to compare the drug DPCA with a placebo in the treatment of primary biliary cirrhosis...
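The weighted-average prediction at the heart of Bayesian model averaging can be sketched briefly. This is a minimal illustration, not the paper's method: the posterior model probabilities and per-model survival predictions below are invented for the example.

```python
# Bayesian model averaging (BMA): the predictive probability of an event
# is the average of each candidate model's prediction, weighted by that
# model's posterior probability. All numbers here are illustrative.

def bma_predict(model_probs, model_preds):
    """Average per-model predictions, weighted by posterior model probability."""
    assert abs(sum(model_probs) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * p for w, p in zip(model_probs, model_preds))

# Three hypothetical survival models with posterior probabilities 0.5/0.3/0.2,
# each predicting a patient's five-year survival probability:
posterior = [0.5, 0.3, 0.2]
survival_preds = [0.70, 0.55, 0.80]
averaged = bma_predict(posterior, survival_preds)
```

Conditioning on the single best model would report 0.70 and ignore the disagreement among models; the averaged prediction folds that model uncertainty into the reported probability.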
Enhancing the Predictive Performance of Bayesian Graphical Models
 Communications in Statistics – Theory and Methods
, 1995
Abstract

Cited by 7 (4 self)
Both knowledge-based systems and statistical models are typically concerned with making predictions about future observables. Here we focus on assessment of predictive performance and provide two techniques for improving the predictive performance of Bayesian graphical models. First, we present Bayesian model averaging, a technique for accounting for model uncertainty. Second, we describe a technique for eliciting a prior distribution for competing models from domain experts. We explore the predictive performance of both techniques in the context of a urological diagnostic problem. KEYWORDS: Prediction; Bayesian graphical model; Bayesian network; Decomposable model; Model uncertainty; Elicitation.

1 Introduction
Both statistical methods and knowledge-based systems are typically concerned with combining information from various sources to make inferences about prospective measurements. Inevitably, to combine information, we must make modeling assumptions. It follows that we should car...
Cognitive Factors Affecting Subjective Probability Assessment
, 1994
Abstract

Cited by 3 (0 self)
This article will consider Hogarth's 1975 assessment that "man is a selective, sequential information processing system with limited capacity, . . . ill-suited for assessing probability distributions." Particular attention will be paid to when people make normatively "good" or "poor" probability assessments, what techniques are effective in eliciting "good," coherent probability assessments, and how these ideas are relevant to the practicing Bayesian statistician. While there are situations where experts can make well-calibrated judgments, it will be argued that more research needs to be done into the effects of expertise, training, and feedback.
The Elicitation of Probabilities: A Review of the Statistical Literature
, 2005
Abstract

Cited by 3 (0 self)
“We live in an uncertain world, and probability risk assessment deals as directly with that fact as anything we do. Uncertainty arises partly because we are fallible. ...”
Default Estimation and Expert Information
, 2008
Abstract

Cited by 2 (1 self)
Default is a rare event, even in segments in the mid-range of a bank’s portfolio. Inference about default rates is essential for risk management and for compliance with the requirements of Basel II. Most commercial loans are in the middle-risk categories and are to unrated companies. Expert information is crucial in inference about defaults. A Bayesian approach is proposed and illustrated using a prior distribution assessed from an industry expert. The binomial model, most common in applications, is extended to allow correlated defaults. A check of robustness is illustrated with an ε-mixture of priors.
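The conjugate core of this approach can be sketched in a few lines. This is a generic beta-binomial sketch, not the paper's elicited prior or correlated-default extension: the expert prior Beta(1, 99) and the observed counts are hypothetical, and the ε-mixture contaminates the expert prior with a diffuse Beta(1, 1), re-weighting the two components by their beta-binomial marginal likelihoods.

```python
import math

def log_beta(a, b):
    """log of the Beta function B(a, b), via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def marginal_lik(a, b, k, n):
    """Beta-binomial marginal likelihood of k defaults among n loans
    under a Beta(a, b) prior on the default rate."""
    return math.comb(n, k) * math.exp(log_beta(a + k, b + n - k) - log_beta(a, b))

def eps_mixture_posterior_mean(a, b, k, n, eps=0.1):
    """Posterior mean of the default rate under an eps-mixture prior:
    (1 - eps) * Beta(a, b) expert prior + eps * Beta(1, 1) diffuse prior.
    The mixture weights are updated by each component's marginal likelihood."""
    m_expert = marginal_lik(a, b, k, n)
    m_diffuse = marginal_lik(1.0, 1.0, k, n)
    w = (1 - eps) * m_expert / ((1 - eps) * m_expert + eps * m_diffuse)
    mean_expert = (a + k) / (a + b + n)      # conjugate Beta posterior mean
    mean_diffuse = (1.0 + k) / (2.0 + n)
    return w * mean_expert + (1 - w) * mean_diffuse

# Hypothetical example: expert prior Beta(1, 99) (prior mean 1% default rate),
# then 2 defaults observed among 100 loans in the segment.
estimate = eps_mixture_posterior_mean(1.0, 99.0, k=2, n=100, eps=0.1)
```

Varying `eps` shows how sensitive the posterior mean is to doubt about the expert's prior, which is the point of the robustness check.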
Default Estimation, Correlated Defaults, and Expert Information
, 2008
Abstract

Cited by 1 (0 self)
The statements made and views expressed herein are solely those of the author, and do not necessarily represent official policies, statements or views of the Office of the [...]. Capital allocation decisions are made on the basis of an assessment of creditworthiness. Default is a rare event for most segments of a bank’s portfolio and data information can be minimal. Inference about default rates is essential for efficient capital allocation, for risk management and for compliance with the requirements of the Basel II rules on capital standards for banks. Expert information is crucial in inference about defaults. A Bayesian approach is proposed and illustrated using prior distributions assessed from industry experts. A maximum entropy approach is used to represent expert information. The binomial model, most common in applications, is extended to allow correlated defaults yet remain consistent with Basel II. The application shows that probabilistic information can be elicited from experts and econometric methods can be useful even when data information is sparse.
A BAYESIAN APPROACH TO ANALYSIS OF LIMIT STANDARDS
Abstract
Limit standards are probabilistic requirements or benchmarks regarding the proportion of replications conforming or not conforming to a desired threshold. Sample proportions resulting from the analysis of replications are known to be beta distributed. As a result, standard constructs for defining a confidence interval on such a proportion, based on critical points from the normal or Student’s t distribution, are increasingly inaccurate as the mean sample proportion approaches the limits of 0 or 1. We consider the Bayesian relationship between the beta and binomial distributions as the foundation for a sequential methodology in the analysis of limit standards. The benefits of using the beta distribution methodology are variance reduction, and smaller sample size (when compared to other analysis methodologies).
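The beta-binomial relationship the abstract invokes can be sketched as follows. This is a minimal illustration under an assumed uniform Beta(1, 1) prior, not the paper's sequential methodology; the 49-of-50 counts are invented, and the interval is approximated by Monte Carlo sampling from the Beta posterior using only the standard library.

```python
import random

def beta_credible_interval(k, n, lo=0.025, hi=0.975, draws=100_000, seed=0):
    """Equal-tailed credible interval for a conforming proportion.
    Uniform Beta(1, 1) prior + k conforming of n replications gives a
    Beta(1 + k, 1 + n - k) posterior, sampled here with random.betavariate;
    no normal approximation is involved, so the interval stays in [0, 1]."""
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(1 + k, 1 + n - k) for _ in range(draws))
    return samples[int(lo * draws)], samples[int(hi * draws)]

# Hypothetical case near the limit: 49 of 50 replications conform.
# The normal-approximation interval 0.98 +/- 1.96 * sqrt(0.98 * 0.02 / 50)
# has an upper endpoint above 1, whereas the beta interval cannot.
low, high = beta_credible_interval(49, 50)
```

Near a proportion of 0 or 1 the normal and t constructions break down exactly as the abstract describes, while the beta posterior remains a proper distribution on [0, 1].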