Results 1–10 of 10
Assessment and Propagation of Model Uncertainty
, 1995
"... this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the ..."
Abstract

Cited by 221 (0 self)
In this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the chance of catastrophic failure of the U.S. Space Shuttle.
Inference and Hierarchical Modeling in the Social Sciences
, 1995
"... this paper I (1) examine three levels of inferential strength supported by typical social science datagathering methods, and call for a greater degree of explicitness, when HMs and other models are applied, in identifying which level is appropriate; (2) reconsider the use of HMs in school effective ..."
Abstract

Cited by 44 (6 self)
In this paper I (1) examine three levels of inferential strength supported by typical social science data-gathering methods, and call for a greater degree of explicitness, when HMs and other models are applied, in identifying which level is appropriate; (2) reconsider the use of HMs in school effectiveness studies and meta-analysis from the perspective of causal inference; and (3) recommend the increased use of Gibbs sampling and other Markov chain Monte Carlo (MCMC) methods in the application of HMs in the social sciences, so that comparisons between MCMC and better-established fitting methods, including full or restricted maximum likelihood estimation based on the EM algorithm, Fisher scoring, or iterative generalized least squares, may be more fully informed by empirical practice.
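The Gibbs sampling recommended in the abstract above can be illustrated with a toy example: alternately drawing each variable from its full conditional for a standard bivariate normal with correlation rho. This is a minimal sketch, far simpler than a hierarchical model; the target distribution, seed, and sample sizes are invented for illustration.

```python
import random
import math

rho = 0.8          # illustrative correlation of the target bivariate normal
random.seed(0)

# For a standard bivariate normal, the full conditionals are
# x | y ~ N(rho * y, 1 - rho^2) and symmetrically for y | x.
x, y = 0.0, 0.0
draws = []
for _ in range(5000):
    x = random.gauss(rho * y, math.sqrt(1 - rho**2))
    y = random.gauss(rho * x, math.sqrt(1 - rho**2))
    draws.append((x, y))

xs = [d[0] for d in draws[500:]]   # discard a burn-in period
print(sum(xs) / len(xs))           # sample mean of x, close to 0
```

In a hierarchical model the same scheme applies, with each parameter block drawn in turn from its full conditional given the data and the other blocks.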
Enhancing the Predictive Performance of Bayesian Graphical Models
 Communications in Statistics – Theory and Methods
, 1995
"... Both knowledgebased systems and statistical models are typically concerned with making predictions about future observables. Here we focus on assessment of predictive performance and provide two techniques for improving the predictive performance of Bayesian graphical models. First, we present Baye ..."
Abstract

Cited by 8 (4 self)
Both knowledge-based systems and statistical models are typically concerned with making predictions about future observables. Here we focus on assessment of predictive performance and provide two techniques for improving the predictive performance of Bayesian graphical models. First, we present Bayesian model averaging, a technique for accounting for model uncertainty. Second, we describe a technique for eliciting a prior distribution for competing models from domain experts. We explore the predictive performance of both techniques in the context of a urological diagnostic problem.

KEYWORDS: Prediction; Bayesian graphical model; Bayesian network; Decomposable model; Model uncertainty; Elicitation.

1 Introduction
Both statistical methods and knowledge-based systems are typically concerned with combining information from various sources to make inferences about prospective measurements. Inevitably, to combine information, we must make modeling assumptions. It follows that we should car...
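The Bayesian model averaging mentioned in the abstract above can be sketched in a few lines: each candidate model's predictive probability is weighted by its posterior model probability. The weights and per-model predictions below are invented for illustration, not taken from the paper.

```python
# Posterior model probabilities p(m | data), e.g. derived from
# marginal likelihoods; illustrative values that sum to 1.
post_model = [0.6, 0.3, 0.1]

# Each model's predictive probability p(y | m) for the event of interest.
pred_per_model = [0.80, 0.55, 0.20]

# BMA predictive probability: sum over models of p(y | m) * p(m | data).
bma_pred = sum(w * p for w, p in zip(post_model, pred_per_model))
print(round(bma_pred, 3))  # 0.665
```

The averaged prediction hedges between the models in proportion to how plausible each is a posteriori, rather than conditioning on a single selected model.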
Bayesian Data Analysis for Data Mining
 In Handbook of Data Mining
, 2002
"... Introduction The Bayesian approach to data analysis computes conditional probability distribu tions of quantities of interest (such as future observables) given the observed data. Bayesian analyses usually begin with a .full probability model  a joint probability dis tribution for all the observ ..."
Abstract

Cited by 1 (0 self)
Introduction
The Bayesian approach to data analysis computes conditional probability distributions of quantities of interest (such as future observables) given the observed data. Bayesian analyses usually begin with a full probability model, a joint probability distribution for all the observable and unobservable quantities under study, and then use Bayes' theorem (Bayes, 1763) to compute the requisite conditional probability distributions (called posterior distributions). The theorem itself is innocuous enough. In its simplest form, if Q denotes a quantity of interest and D denotes data, the theorem states: P(Q | D) = P(D | Q) × P(Q) / P(D). This theorem prescribes the basis for statistical learning in the probabilistic framework. With p(Q) regarded as a probabilistic statement of prior knowledge about Q before obtaining the data D, p(Q | D) becomes a revised probabilistic statement of our knowledge about Q in the light of the data (Bernardo and Smith, 1994, p. 2). The marginal lik...
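The discrete form of Bayes' theorem stated above can be computed directly. The prior and likelihood values below are invented for illustration (a generic diagnostic-test setting, not data from the chapter).

```python
# p(Q): prior over the quantity of interest.
prior = {"disease": 0.01, "healthy": 0.99}

# p(D | Q): likelihood of a positive test result under each state.
likelihood = {"disease": 0.95, "healthy": 0.05}

# Marginal likelihood p(D) = sum over q of p(D | q) * p(q).
marginal = sum(likelihood[q] * prior[q] for q in prior)

# Posterior p(Q | D) = p(D | Q) * p(Q) / p(D), per Bayes' theorem.
posterior = {q: likelihood[q] * prior[q] / marginal for q in prior}
print(round(posterior["disease"], 4))
```

Even with a fairly accurate test, the posterior probability of disease stays modest here because the prior prevalence is low, which is exactly the kind of revision of belief the passage describes.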
1.1 Quantification of uncertainty: classical, frequentist, and Bayesian definitions of probability. Subjectivity and objectivity. Case study: Diagnostic screening for HIV
1.2 Sequential learning; Bayes' Theorem. Inference (science) and decision-making (policy and business).
1.3 Bayesian decision theory; coherence. Maximization of expected utility
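The maximization of expected utility named in item 1.3 can be sketched directly: choose the action whose utility, averaged over the posterior on states, is largest. The state probabilities, actions, and utility values below are invented for illustration.

```python
# Posterior probabilities over states of the world (illustrative).
posterior = {"state_good": 0.7, "state_bad": 0.3}

# Utility of each (action, state) pair; "act_a" is risky, "act_b" is safe.
utility = {
    "act_a": {"state_good": 100, "state_bad": -50},
    "act_b": {"state_good": 40, "state_bad": 30},
}

def expected_utility(action):
    # E[U(a)] = sum over states s of p(s) * U(a, s)
    return sum(posterior[s] * utility[action][s] for s in posterior)

best = max(utility, key=expected_utility)
print(best, expected_utility(best))  # act_a wins with expected utility 55
```

Coherence, in the sense of item 1.3, is what justifies ranking actions by this single expected-utility number.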
Volume I: Theory and Methods for Quality Evaluation (Preface)
"... The Model Quality Report in Business Statistics project was set up to develop a detailed description of the methods for assessing the quality of surveys, with particular application in the context of business surveys, and then to apply these methods in some example surveys to evaluate their quality. ..."
Abstract
The Model Quality Report in Business Statistics project was set up to develop a detailed description of the methods for assessing the quality of surveys, with particular application in the context of business surveys, and then to apply these methods in some example surveys to evaluate their quality. The work was specified and initiated by Eurostat following on from the Working Group on Quality of Business Statistics. It was funded by Eurostat under SUPCOM 1997, lot 6, and has been undertaken by a consortium of the UK Office for National Statistics, Statistics Sweden, the University of Southampton and the University of Bath, with the Office for National Statistics managing the contract. The report is divided into four volumes, of which this is the first. This volume deals with the theory and methods for assessing quality in business surveys in nine chapters following the survey process through its various stages in order. These fall into three parts, one dealing with sampling errors, one with a variety of non-sampling errors, and one covering coherence and comparability of statistics. Other volumes of the report contain:
• a comparison of the software methods and packages available for variance estimation in sample surveys (volume II);
• example assessments of quality for an annual and a monthly business survey from Sweden and the UK (volume III);
• guidelines for and experiences of implementing the methods (volume IV).
An outline of the chapters in the report is given on the following page.
Acknowledgements
Apart from the authors, several other people have made large contributions without which this report would not have reached its current form. In particular we would like to mention
Lecture Notes 2: Exchangeability
"... i. definitions To motivate these notes, consider the standard linear model β ε = +i i iy X (2.1) It is common to assume that the errors are normal and independent and identically distributed (n.i.d.) or that the errors are i.i.d. Assumptions of this type do not naturally correspond to the substantiv ..."
Abstract
i. Definitions
To motivate these notes, consider the standard linear model

y_i = X_i β + ε_i    (2.1)

It is common to assume that the errors are normal and independent and identically distributed (n.i.d.) or that the errors are i.i.d. Assumptions of this type do not naturally correspond to the substantive social science one brings to an empirical exercise. Similar comments can be made about more general data generating processes than (2.1), of course. I want to argue that the substantive social science knowledge one brings to an analysis is more comfortably associated with the notion of exchangeability. The definition of exchangeability employs the permutation operator ρ(i), which rearranges any set of integers.

Definition 2.1 (Exchangeability). A collection of random variables ε_i is exchangeable if, for every finite subset of the random variables, the joint distribution of (ε_1, ..., ε_n) is the same as that of (ε_ρ(1), ..., ε_ρ(n)) for every permutation ρ.
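A standard example of an exchangeable but not independent sequence, in the spirit of these notes, draws a common latent mean once and then generates errors i.i.d. given that mean; marginally the errors are exchangeable yet positively correlated. The variances below are illustrative choices, not values from the notes.

```python
import random

random.seed(1)

def draw_sequence(n):
    """One exchangeable sequence: mu shared, errors i.i.d. given mu."""
    mu = random.gauss(0, 1)                    # latent mean, variance 1
    return [random.gauss(mu, 1) for _ in range(n)]

# Empirical covariance between eps_1 and eps_2 across many sequences.
pairs = [draw_sequence(2) for _ in range(20000)]
m1 = sum(p[0] for p in pairs) / len(pairs)
m2 = sum(p[1] for p in pairs) / len(pairs)
cov = sum((p[0] - m1) * (p[1] - m2) for p in pairs) / len(pairs)
print(cov)  # close to Var(mu) = 1, so the errors are dependent
```

Permuting the indices of such a sequence leaves its joint distribution unchanged, which is exactly the symmetry Definition 2.1 captures, even though the errors are not independent.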
Accounting for Epistemic Uncertainty in PSHA: Logic Tree and Ensemble Modeling
"... Abstract Any trustworthy probabilistic seismichazard analysis (PSHA) has to account for the intrinsic variability of the system (aleatory variability) and the limited knowledge of the system itself (epistemic uncertainty). The most popular framework for this purpose is the logic tree. Notwithstand ..."
Abstract
Any trustworthy probabilistic seismic-hazard analysis (PSHA) has to account for the intrinsic variability of the system (aleatory variability) and the limited knowledge of the system itself (epistemic uncertainty). The most popular framework for this purpose is the logic tree. Notwithstanding its vast popularity, the logic-tree outcomes are still interpreted in two different and irreconcilable ways. In one case, practitioners claim that the mean hazard of the logic tree is the hazard and the distribution of all outcomes does not have any probabilistic meaning. On the other hand, other practitioners describe the seismic hazard using the distribution of all logic-tree outcomes. In this article, we explore in detail the reasons for this controversy regarding the interpretation of the logic tree, showing that the distribution of all outcomes is more appropriate to provide a joint, full description of aleatory variability and epistemic uncertainty. Then, we provide a more general framework, which we call ensemble modeling, in which the logic-tree outcomes can be embedded. In this framework, the logic tree is not a classical probability tree; it is just a technical tool that samples epistemic uncertainty. Ensemble modeling consists of inferring the parent distribution of the epistemic uncertainty from which this sample is drawn. Ensemble modeling offers some remarkable additional features. First, it allows a rigorous and meaningful validation of any PSHA; this is essential if we want to keep PSHA within the scientific domain. Second, it provides a proper and clear description of the aleatory variability and epistemic uncertainty that can help stakeholders appreciate the whole range of uncertainties in PSHA. Third, it may help to reduce the computational time when the logic tree becomes computationally intractable because of too many branches.