Results 1 - 4 of 4
Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty, 2007
Abstract

Cited by 20 (14 self)
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
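The bounds described above are straightforward for statistics that are monotone in each datum. A minimal sketch, not the report's own algorithms: for the mean and median, exact bounds over all point samples consistent with the intervals come from applying the statistic to the lower and upper endpoints separately (the harder cases the report treats, such as variance, do not reduce this way).

```python
from statistics import mean, median

def interval_mean(intervals):
    """Bounds on the sample mean of interval-valued data [(lo, hi), ...]."""
    los = [lo for lo, hi in intervals]
    his = [hi for lo, hi in intervals]
    return (mean(los), mean(his))

def interval_median(intervals):
    """Bounds on the sample median (monotone in each datum)."""
    los = [lo for lo, hi in intervals]
    his = [hi for lo, hi in intervals]
    return (median(los), median(his))

# Hypothetical data: point estimates are just width-zero intervals.
data = [(1.0, 2.0), (1.5, 1.5), (0.5, 3.0)]
print(interval_mean(data))    # lower and upper bounds on the mean
print(interval_median(data))  # lower and upper bounds on the median
```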
Sensitivity in risk analyses with uncertain numbers, 2006
Abstract

Cited by 7 (0 self)
Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a “pinching” strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
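The "pinching" idea can be sketched on a hypothetical model with purely epistemic (interval) inputs; the dike study in the report involves richer Dempster-Shafer and probability-bounds structures not shown here. The model `f(a, b) = a * b` and the input intervals below are illustrative assumptions; for a model monotone in each input, endpoint enumeration gives the exact output interval.

```python
from itertools import product

def output_width(f, boxes):
    """Width of the output interval, by enumerating interval endpoints
    (exact when f is monotone in each input)."""
    vals = [f(*pt) for pt in product(*boxes)]
    return max(vals) - min(vals)

f = lambda a, b: a * b
boxes = [(1.0, 3.0), (2.0, 5.0)]       # hypothetical epistemic inputs
base = output_width(f, boxes)          # output width with full uncertainty

# Pinch input a to its midpoint and see how much the output width shrinks;
# the fractional reduction measures the sensitivity attributable to a.
pinched = output_width(f, [(2.0, 2.0), boxes[1]])
print(1 - pinched / base)
```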
Interval Computations as an Important Part of Granular Computing: An Introduction
Abstract

Cited by 1 (0 self)
This chapter provides a general introduction to interval computations, especially to interval computations as an important part of granular computing.
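The basic object of interval computation can be sketched in a few lines: each arithmetic operation returns an interval guaranteed to enclose every possible point result. This is a minimal illustration, ignoring the outward floating-point rounding that a production interval library would apply.

```python
class Interval:
    """A closed interval [lo, hi] with enclosure-preserving arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Sign patterns vary, so take the extremes over all endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)
y = Interval(-1, 3)
print(x + y)   # [0, 5]
print(x * y)   # [-2, 6]
```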
Projecting uncertainty through black boxes, 2008
Abstract
Computational models whose internal details are not accessible to the analyst are called black boxes. They arise because of security restrictions or because of the loss of the source code for legacy software programs. Computational models whose internal details are extremely complex are also sometimes treated as black boxes. It is often important to assess the uncertainty that should be ascribed to the output from a black box owing to uncertainty about its input quantities, their statistical distributions, or interdependencies. Sensitivity or ‘what-if’ studies are commonly used for this purpose. In such studies, the space of possible inputs is sampled as a vector of real values which is then provided to the black box to compute the output(s) that correspond to those inputs. Such studies are often cumbersome to implement and understand, and they generally require many samples, depending on the complexity of the model and the dimensionality of the inputs. This report reviews methods that can be used to propagate uncertainty about inputs through black boxes, especially ‘hard’ black boxes whose computational complexity restricts the total number of samples that can be evaluated. The focus is on methods that estimate the uncertainty of the outputs from the outside inward. That is, we are interested in methods that produce conservative characterizations of uncertainty that become tighter and tighter as the total computational effort increases.
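The what-if sampling study described in this abstract can be sketched as below, with a hypothetical stand-in model and assumed input ranges. Note that sampled extremes only *under*estimate the true output range, which motivates the outside-inward, conservative methods the report reviews (not shown here).

```python
import random

def black_box(x, y):
    """Hypothetical stand-in for a model whose internals are inaccessible."""
    return x * x + y

def what_if_study(model, ranges, n_samples, seed=0):
    """Sample input vectors from their ranges, feed each to the black box,
    and collect the extremes of the outputs."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n_samples):
        point = [rng.uniform(lo, hi) for lo, hi in ranges]
        outs.append(model(*point))
    return min(outs), max(outs)   # an *inner* estimate of the output range

lo, hi = what_if_study(black_box, [(-1.0, 2.0), (0.0, 1.0)], n_samples=1000)
print(lo, hi)   # contained within the true output range [0.0, 5.0]
```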