Network Engineering for Complex Belief Networks
In Proc. UAI, 1996
Abstract

Cited by 33 (4 self)
Developing a large belief network, like any large system, requires systems engineering to manage the design and construction process. We propose that network engineering follow a rapid-prototyping approach to network construction. We describe criteria for identifying network modules and the use of 'stubs' within a belief network. We propose an object-oriented representation for belief networks which captures the semantic as well as the representational knowledge embedded in the variables, their values, and their parameters. Methods for evaluating complex networks are described. Throughout the discussion, tools which support the engineering of large belief networks are identified. 1. Introduction As belief networks become more popular and well understood as a tool for modeling uncertainty, and as the computational power of belief network inference engines increases, belief networks are being applied to problems of increasing size and complexity. In the early 1990s, Pathfinder, at 109 nodes...
Statistical Methods for Eliciting Probability Distributions
Journal of the American Statistical Association, 2005
Abstract

Cited by 32 (1 self)
Elicitation is a key task for subjectivist Bayesians. While skeptics hold that it cannot (or perhaps should not) be done, in practice it brings statisticians closer to their clients and subject-matter-expert colleagues. This paper reviews the state of the art, reflecting the experience of statisticians informed by the fruits of a long line of psychological research into how people represent uncertain information cognitively, and how they respond to questions about that information. In a discussion of the elicitation process, the first issue to address is what it means for an elicitation to be successful, i.e. what criteria should be employed. Our answer is that a successful elicitation faithfully represents the opinion of the person being elicited. It is not necessarily “true” in some objectivistic sense, and cannot be judged that way. We see elicitation as simply part of the process of statistical modeling. Indeed, in a hierarchical model it is ambiguous at which point the likelihood ends and the prior begins. Thus the same kinds of judgment that inform statistical modeling in general also inform the elicitation of prior distributions.
Using Simulation to Build Inspection Efficiency Benchmarks for Development Projects
1997
Abstract

Cited by 30 (17 self)
It is difficult for organizations introducing and using software inspections to evaluate how efficient they are. However, it is of practical importance to determine whether inspections have been effectively implemented or whether corrective actions are necessary to bring them up to standard. We present in this paper a procedure for building inspection efficiency benchmarks based on simulation and typical inspection data. Based on most of the data published in the literature, we build an industry-wide benchmark intended to capture current practice regarding inspection efficiency. Moreover, we discuss how this benchmark-construction procedure can be used to build enterprise-specific benchmarks. Lastly, we assess how robust we can expect these benchmarks to be under varying conditions by distorting their input distributions.
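A minimal Python sketch of such a simulation-based benchmark; the input distributions (the defect-count range, the Beta-distributed per-defect detection probability) and the lower-quartile threshold are hypothetical illustrations, not the fitted distributions the paper derives from published inspection data:

```python
import random

def simulate_benchmark(n_sims=10000, threshold_quantile=0.25):
    """Monte Carlo benchmark for inspection efficiency (defects found / defects present)."""
    efficiencies = []
    for _ in range(n_sims):
        defects_present = random.randint(10, 50)   # hypothetical defect-count range
        detection_rate = random.betavariate(6, 4)  # hypothetical per-defect detection probability
        found = sum(random.random() < detection_rate for _ in range(defects_present))
        efficiencies.append(found / defects_present)
    efficiencies.sort()
    # Inspections whose efficiency falls below this quantile of the simulated
    # distribution would be flagged as candidates for corrective action.
    return efficiencies[int(threshold_quantile * n_sims)]
```

Replacing the illustrative distributions with ones fitted to an organization's own inspection data would yield the enterprise-specific variant the abstract mentions.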
Building probabilistic networks: where do the numbers come from? A guide to the literature
IEEE Transactions on Knowledge and Data Engineering, 2000
Abstract

Cited by 29 (3 self)
Probabilistic networks are now fairly well established as practical representations of knowledge for reasoning under uncertainty, as demonstrated by an increasing number of successful applications in such domains as (medical) diagnosis and prognosis, planning, vision, ...
Quantitative analysis of variability and uncertainty in emission estimation: An illustration of methods using mixture distributions
Paper No. 11, Proceedings of the Annual Meeting of the Air & Waste Management Association, 2001
Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty
2007
Abstract

Cited by 20 (14 self)
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
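For the simplest of these statistics, the sample mean, the bounds are immediate because the mean is monotone in each datum; a minimal sketch (the report's algorithms for variance, percentiles, and the other statistics are substantially more involved):

```python
def interval_mean(intervals):
    """Sharp bounds on the sample mean of data given as (lo, hi) intervals."""
    n = len(intervals)
    # The mean is monotone in every datum, so its extremes are attained by
    # taking all lower endpoints, then all upper endpoints.
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

# Three measurements reported only to interval precision
data = [(1.0, 2.0), (1.5, 2.5), (0.5, 3.0)]
bounds = interval_mean(data)  # → (1.0, 2.5)
```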
Aggregating disparate estimates of chance
2004
Abstract

Cited by 19 (4 self)
We consider a panel of experts asked to assign probabilities to events, both logically simple and complex. The events evaluated by different experts are based on overlapping sets of variables but may otherwise be distinct. The union of all the judgments will likely be probabilistically incoherent. We address the problem of revising the probability estimates of the panel so as to produce a coherent set that best represents the group’s expertise.
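As a toy instance of the revision problem, take two assessments of an event and its complement that are jointly incoherent; the least-squares projection onto the coherent set simply splits the excess probability equally (a hypothetical minimal sketch; the paper handles general overlapping event sets):

```python
def reconcile_complementary(p_a, p_not_a):
    """Least-squares coherent revision of estimates for P(A) and P(not A).

    Coherence requires p_a + p_not_a == 1; the closest coherent pair in
    squared error shifts each estimate by half the excess.
    """
    excess = p_a + p_not_a - 1.0
    return p_a - excess / 2.0, p_not_a - excess / 2.0

# P(A) = 0.7 and P(not A) = 0.4 are jointly incoherent; the revision
# yields approximately (0.65, 0.35), which sums to 1.
revised = reconcile_complementary(0.7, 0.4)
```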
Subjective probability assessment in decision analysis: Partition dependence and bias toward the ignorance prior
Management Science, 2005
doi: 10.1287/mnsc.1050.0409
Assessing Uncertainty in Urban Simulations Using Bayesian Melding
Abstract

Cited by 18 (2 self)
We develop a method for assessing uncertainty about quantities of interest using urban simulation models. The method is called Bayesian melding, and extends a previous method developed for macro-level deterministic simulation models to agent-based stochastic models. It encodes all the available information about model inputs and outputs in terms of prior probability distributions and likelihoods, and uses Bayes’s theorem to obtain the resulting posterior distribution of any quantity of interest that is a function of model inputs and/or outputs. It is Monte Carlo-based and quite easy to implement. We applied it to the projection of future household numbers by traffic activity zone in Eugene-Springfield, Oregon, using the UrbanSim model developed at the University of Washington. We compared it with a simpler method that uses repeated runs of the model with fixed estimated inputs. We found that the simple repeated-runs method gave distributions of quantities of interest that were too narrow, while Bayesian melding gave well-calibrated uncertainty statements.
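The Monte Carlo mechanics the abstract alludes to can be realized as sampling-importance-resampling; in this sketch the identity model and Gaussian likelihood are hypothetical stand-ins for UrbanSim and real observations:

```python
import math
import random

def bayesian_melding(prior_sample, model, likelihood, n_draws=5000, n_keep=1000):
    # Draw model inputs from their prior distribution
    draws = [prior_sample() for _ in range(n_draws)]
    # Run the simulation model once per draw
    outputs = [model(theta) for theta in draws]
    # Weight each run by the likelihood of its output under the observed data
    weights = [likelihood(y) for y in outputs]
    # Resample runs in proportion to their weights (SIR); the kept outputs
    # approximate the posterior distribution of the quantity of interest.
    return random.choices(outputs, weights=weights, k=n_keep)

# Toy stand-in: the model is the identity and the data favour outputs near 0.5
posterior = bayesian_melding(
    prior_sample=lambda: random.uniform(0.0, 1.0),
    model=lambda theta: theta,
    likelihood=lambda y: math.exp(-((y - 0.5) ** 2) / (2 * 0.01 ** 2)),
)
```

Fixing the inputs at point estimates, as in the simpler repeated-runs method, skips the prior-sampling and weighting steps; that is consistent with the abstract's finding that repeated runs understate uncertainty.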
When We Don't Know the Costs or the Benefits: Adaptive Strategies for Abating Climate Change
Climatic Change, 1996
Abstract

Cited by 18 (4 self)
Most quantitative studies of climate-change policy attempt to predict the greenhouse-gas reduction plan that will have the optimum balance of long-term costs and benefits. We find that the large uncertainties associated with the climate-change problem can make the policy prescriptions of this traditional approach unreliable. In this study, we construct a large uncertainty space that includes the possibility of large and/or abrupt climate changes and/or of technology breakthroughs that radically reduce projected abatement costs. We use computational experiments on a linked system of climate and economic models to compare the performance of a simple adaptive strategy, one that can make mid-course corrections based on observations of the climate and economic systems, with two commonly advocated best-estimate policies based on different expectations about the long-term consequences of climate change. We find that the Do-a-Little and Emissions-Stabilization best-estimate policies perform well in the respective regions of the uncertainty space where their estimates are valid, but can fail severely in those regions where their estimates are wrong. In contrast, the adaptive strategy can make mid-course corrections and avoid significant errors. While its success is no surprise, the adaptive-strategy approach provides an analytic framework to examine important policy and research issues that will likely arise as society adapts to climate change, and which cannot be easily addressed in studies using best-estimate approaches.