Results 1 - 5 of 5
Model selection and accounting for model uncertainty in graphical models using Occam's window
, 1993
Abstract

Cited by 266 (46 self)
We consider the problem of model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications. The approach most used currently is a stepwise strategy guided by tests based on approximate asymptotic P-values leading to the selection of a single model; inference is then conditional on the selected model. The sampling properties of such a strategy are complex, and the failure to take account of model uncertainty leads to underestimation of uncertainty about quantities of interest. In principle, a panacea is provided by the standard Bayesian formalism which averages the posterior distributions of the quantity of interest under each of the models, weighted by their posterior model probabilities. Furthermore, this approach is optimal in the sense of maximising predictive ability. However, this has not been used in practice because computing the posterior model probabilities is hard and the number of models is very large (often greater than 10^11). We argue that the standard Bayesian formalism is unsatisfactory and we propose an alternative Bayesian approach that, we contend, takes full account of the true model uncertainty by averaging over a much smaller set of models. An efficient search algorithm is developed for finding these models. We consider two classes of graphical models that arise in expert systems: the recursive causal models and the decomposable ...
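The "standard Bayesian formalism" this abstract refers to can be sketched as follows, with Δ a quantity of interest, D the data, and M_1, …, M_K the candidate models (this notation is assumed here for illustration, not taken from the abstract):

```latex
P(\Delta \mid D) \;=\; \sum_{k=1}^{K} P(\Delta \mid M_k, D)\, P(M_k \mid D),
\qquad
P(M_k \mid D) \;=\; \frac{P(D \mid M_k)\, P(M_k)}{\sum_{j=1}^{K} P(D \mid M_j)\, P(M_j)}.
```

The computational difficulty the abstract notes comes from the sum over all K models, which for high-dimensional contingency tables can run to more than 10^11 terms.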
Bayesian model averaging
 STAT.SCI
, 1999
Abstract

Cited by 42 (0 self)
Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to overconfident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA have recently emerged. We discuss these methods and present a number of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of ...
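As a rough illustration of the averaging mechanism described above, the sketch below weights each model's prediction by an approximate posterior model probability derived from BIC (one common implementation route). The model BIC values and predictions are hypothetical, not taken from the paper:

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values:
    w_k is proportional to exp(-BIC_k / 2), normalised to sum to 1."""
    best = min(bics)  # shift by the minimum for numerical stability
    rel = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(rel)
    return [r / total for r in rel]

def bma_predict(predictions, weights):
    """Average each model's point prediction by its posterior weight."""
    return sum(p * w for p, w in zip(predictions, weights))

bics = [100.0, 102.0, 110.0]            # hypothetical BIC scores for three models
weights = bma_weights(bics)             # smaller BIC receives larger weight
prediction = bma_predict([1.0, 1.4, 3.0], weights)
```

The averaged prediction always lies between the most extreme individual model predictions, and models with poor (large) BIC contribute almost nothing to it.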
Enhancing the Predictive Performance of Bayesian Graphical Models
 Communications in Statistics – Theory and Methods
, 1995
Abstract

Cited by 7 (4 self)
Both knowledge-based systems and statistical models are typically concerned with making predictions about future observables. Here we focus on assessment of predictive performance and provide two techniques for improving the predictive performance of Bayesian graphical models. First, we present Bayesian model averaging, a technique for accounting for model uncertainty. Second, we describe a technique for eliciting a prior distribution for competing models from domain experts. We explore the predictive performance of both techniques in the context of a urological diagnostic problem. KEYWORDS: Prediction; Bayesian graphical model; Bayesian network; Decomposable model; Model uncertainty; Elicitation. 1 Introduction Both statistical methods and knowledge-based systems are typically concerned with combining information from various sources to make inferences about prospective measurements. Inevitably, to combine information, we must make modeling assumptions. It follows that we should car...
Simultaneous Variable and Transformation Selection in Linear Regression
 JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS
, 1995
Abstract

Cited by 7 (5 self)
We suggest a method for simultaneous variable and transformation selection based on posterior probabilities. A simultaneous approach avoids the problem that the order in which they are done might influence the choice of variables and transformations. The simultaneous approach also allows for consideration of all possible models. We use a change-point model, or "change-point transformation", which often yields more interpretable models and transformations than the standard Box-Tidwell approach. We also address the problem of model uncertainty in the selection of models. By averaging over models, we account for the uncertainty inherent in inference based on a single model chosen from the set of all possible models. We use a Markov chain Monte Carlo model composition (MC³) method which allows us to average over linear regression models when the space of all possible models is very large. This approach considers the selection of variables and transformations at the same time. In an example, we ...
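The MC³ idea mentioned above can be sketched as a Metropolis walk over variable-inclusion vectors, visiting models in proportion to their posterior probability. The `log_model_prob` scorer below is a hypothetical stand-in for a real log marginal likelihood (or BIC approximation) of a regression model:

```python
import math
import random

def log_model_prob(gamma):
    # Hypothetical scorer: rewards including variable 0 and applies
    # a crude complexity penalty per included variable.
    return (2.0 if gamma[0] else 0.0) - 0.5 * sum(gamma)

def mc3(n_vars, n_steps, seed=0):
    """Metropolis sampler over models encoded as inclusion vectors."""
    rng = random.Random(seed)
    gamma = [False] * n_vars            # start from the null model
    visits = {}
    for _ in range(n_steps):
        prop = gamma[:]
        j = rng.randrange(n_vars)       # propose flipping one variable
        prop[j] = not prop[j]
        # Metropolis acceptance on the log posterior-probability ratio
        delta = log_model_prob(prop) - log_model_prob(gamma)
        if delta >= 0 or rng.random() < math.exp(delta):
            gamma = prop
        key = tuple(gamma)
        visits[key] = visits.get(key, 0) + 1
    return visits

visits = mc3(n_vars=3, n_steps=5000)
best = max(visits, key=visits.get)      # most frequently visited model
```

Averaging any quantity over `visits`, weighted by visit counts, approximates its BMA posterior without enumerating the full model space.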
F. Bacchus, Representing and Reasoning with Probabilistic Knowledge: A
Abstract
Artificial intelligence has its roots in symbolic logic, and for many years it showed little interest in probability. But during the past decade, disinterest has been replaced by engagement. The flowering of expert systems during the 1980s strengthened ties between AI and areas of engineering and business that had long used probability, and led to hybrid rule-based and probabilistic expert systems for a plethora of engineering and business problems, including speech recognition, vision, site selection, and process control. At the same time, probabilistic and statistical thinking has penetrated many areas of AI theory, including learning (Vapnik 1983, Valiant 1991), planning (Dean and Wellman 1991), and the evaluation of artificial agents (Cohen 1990), to the point that AI has emerged as a contributor to the theory of probability and statistics. What can the new role for probability in AI teach us about the philosophy of probability? Do the old interpretations of probability do ...