Results 1–10 of 59
Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty
SAND2007-0939; hal-00839639, version 1, 28 Jun 2013
Cited by 39 (20 self)
Abstract: "Sandia is a multiprogram laboratory operated by Sandia Corporation, ..."
A New Cauchy-Based Black-Box Technique for Uncertainty in Risk Analysis
Reliability Engineering and Systems Safety, 2002
Cited by 36 (17 self)
Abstract:
Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need to describe methods of handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general formalism for handling different types of uncertainty, and to describe a new black-box technique for processing this type of uncertainty.
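The Cauchy-deviate idea behind this black-box technique can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the bisection-based maximum-likelihood step are our own. The underlying fact is that if each input perturbation d_i is Cauchy-distributed with scale Δ_i, then for a near-linear f the output perturbation is Cauchy with scale Δ = ∑_i |∂f/∂x_i|·Δ_i, which is exactly the linearized worst-case error bound.

```python
import numpy as np

def cauchy_error_bound(f, x_tilde, deltas, n_trials=10_000, seed=0):
    """Estimate the linearized error bound Delta = sum_i |df/dx_i| * Delta_i
    of a black-box function f via Cauchy-distributed perturbations:
    if d_i ~ Cauchy(0, Delta_i) independently, then f(x+d) - f(x) is
    (for near-linear f) Cauchy(0, Delta), so Delta is recovered by
    maximum-likelihood estimation of the Cauchy scale."""
    rng = np.random.default_rng(seed)
    x_tilde = np.asarray(x_tilde, dtype=float)
    deltas = np.asarray(deltas, dtype=float)
    f0 = f(x_tilde)
    # Cauchy samples via inverse CDF: Delta_i * tan(pi * (u - 1/2))
    u = rng.uniform(size=(n_trials, len(x_tilde)))
    d = deltas * np.tan(np.pi * (u - 0.5))
    c = np.array([f(x_tilde + di) - f0 for di in d])
    # MLE of the Cauchy scale s solves sum(c^2 / (c^2 + s^2)) = n/2;
    # the left side is decreasing in s, so bisection applies.
    lo, hi = 1e-12, np.max(np.abs(c)) + 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.sum(c**2 / (c**2 + mid**2)) > n_trials / 2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The key practical advantage, as in the paper, is that the number of calls to f does not grow with the number of inputs n, only with the desired accuracy of the scale estimate.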
Error Estimations For Indirect Measurements: Randomized Vs. Deterministic Algorithms For "Black-Box" Programs
Handbook on Randomized Computing, Kluwer, 2001
Cited by 31 (14 self)
Abstract:
In many real-life situations, it is very difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure a distance to a distant galaxy or the amount of oil in a given well. Since we cannot measure such quantities directly, we can measure them indirectly: by first measuring some related quantities x_1, …, x_n, and then by using the known relation between x_i and y to reconstruct the value of the desired quantity y. In practice, it is often very important to estimate the error of the resulting indirect measurement. In this paper, we describe and compare different deterministic and randomized algorithms for solving this problem in the situation when a program for transforming the estimates x̃_1, …, x̃_n for x_i into an estimate for y is only available as a black box (with no source code at hand). We consider this problem in two settings: statistical, when measurement errors Δx_i = x̃_i − x_i are inde…
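The basic deterministic algorithm for this problem can be sketched in a few lines (a simplified illustration under a linearity assumption; the function name is ours, and the paper's full treatment covers more settings). Perturbing each input in turn by its error bound Δ_i takes n + 1 calls to the black box and sums the absolute output changes:

```python
import numpy as np

def linearized_error_bound(f, x_tilde, deltas):
    """Deterministic estimate of the error bound of an indirect measurement
    y = f(x_1, ..., x_n): perturb each input by its error bound Delta_i in
    turn (n + 1 calls to f) and sum the absolute output changes.
    Valid when f is approximately linear on the box of possible inputs."""
    x_tilde = np.asarray(x_tilde, dtype=float)
    f0 = f(x_tilde)
    bound = 0.0
    for i, d in enumerate(deltas):
        x = x_tilde.copy()
        x[i] += d
        bound += abs(f(x) - f0)   # ~ |df/dx_i| * Delta_i
    return bound
```

The n + 1 calls are what the randomized (Cauchy-based) alternatives avoid when n is large.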
Astrogeometry, Error Estimation, and Other Applications of Set-Valued Analysis
ACM SIGNUM Newsletter, 1996
Cited by 29 (27 self)
Abstract:
In many real-life application problems, we are interested in numbers, namely, in the numerical values of the physical quantities. There are, however, at least two classes of problems in which we are actually interested in sets: in image processing (e.g., in astronomy), the desired black-and-white image is, from the mathematical viewpoint, a set.
Outlier Detection Under Interval Uncertainty: Algorithmic Solvability and Computational Complexity
Large-Scale Scientific Computing: Proceedings of the 4th International Conference LSSC'2003, Sozopol, Bulgaria, June 4–8, 2003, Springer Lecture Notes in Computer Science
Cited by 19 (11 self)
Abstract:
In many application areas, it is important to detect outliers. The traditional engineering approach to outlier detection is that we start with some "normal" values x_1, …, x_n, compute the sample average E and the sample standard deviation σ, and then mark a value x as an outlier if x is outside the k₀-sigma interval [E − k₀·σ, E + k₀·σ] (for some preselected parameter k₀). In real life, we often have only interval ranges [x̲_i, x̄_i] for the normal values x_1, …, x_n. In this case, we only have intervals of …
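The classical (non-interval) k₀-sigma rule that this paper generalizes can be sketched as follows; function names are illustrative, not from the paper:

```python
import numpy as np

def outlier_interval(xs, k0=3.0):
    """Classical k0-sigma rule: compute the sample average E and the
    population standard deviation sigma of the 'normal' values, and treat
    [E - k0*sigma, E + k0*sigma] as the interval of non-outliers."""
    xs = np.asarray(xs, dtype=float)
    E = xs.mean()
    sigma = xs.std()   # population formula (1/n), matching the paper's V
    return E - k0 * sigma, E + k0 * sigma

def is_outlier(x, xs, k0=3.0):
    """Flag x as an outlier if it falls outside the k0-sigma interval."""
    lo, hi = outlier_interval(xs, k0)
    return x < lo or x > hi
```

Under interval uncertainty, E and σ themselves become intervals, which is precisely where the algorithmic questions studied in the paper arise.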
Population Variance under Interval Uncertainty: A New Algorithm
Reliable Computing, 2006
Cited by 19 (17 self)
Abstract:
In statistical analysis of measurement results, it is often beneficial to compute the range of the population variance V = (1/n) · ∑_{i=1}^n (x_i − E)² …
Exact bounds on finite populations of interval data
Reliable Computing, 2001
Cited by 15 (10 self)
Abstract:
In this paper, we start research into using intervals to bound the impact of bounded measurement errors on the computation of bounds on finite population parameters ("descriptive statistics"). Specifically, we provide a feasible (quadratic-time) algorithm for computing the lower bound σ̲² of the finite population variance function of interval data. We prove that the problem of computing the upper bound σ̄² is, in general, NP-hard. We provide a feasible algorithm that computes σ̄² under reasonable, easily verifiable conditions, and provide preliminary results on computing other functions of finite populations.
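The structure behind the quadratic-time lower-bound algorithm can be sketched as follows (an illustrative implementation under our own naming, not the paper's code). Variance is convex, so at its minimum over the box each x_i is the clipping of a common value m to its interval, with m equal to the resulting mean; it therefore suffices to scan the zones determined by the sorted interval endpoints:

```python
import numpy as np

def variance_lower_bound(los, his):
    """Lower endpoint of the range of the population variance over the box
    [lo_1, hi_1] x ... x [lo_n, hi_n].  At the minimum, x_i = clip(m, lo_i, hi_i)
    with m equal to the mean; candidate values of m are the interval endpoints
    plus one fixed point per zone between consecutive endpoints (O(n^2) total)."""
    los = np.asarray(los, dtype=float)
    his = np.asarray(his, dtype=float)
    n = len(los)
    endpoints = np.sort(np.concatenate([los, his]))
    candidates = list(endpoints)
    for k in range(len(endpoints) - 1):
        a, b = endpoints[k], endpoints[k + 1]
        clamped_hi = his[his <= a]    # intervals left of the zone: x_i = hi_i
        clamped_lo = los[los >= b]    # intervals right of the zone: x_i = lo_i
        n_free = n - len(clamped_hi) - len(clamped_lo)
        if n_free < n:
            # free x_i equal m and do not shift the mean, so the fixed-point
            # condition m = mean gives m = (sum of clamped values)/(n - n_free)
            m = (clamped_hi.sum() + clamped_lo.sum()) / (n - n_free)
            if a <= m <= b:
                candidates.append(m)
    best = None
    for m in candidates:
        x = np.clip(m, los, his)
        v = x.var()
        if best is None or v < best:
            best = v
    return best
```

When all intervals share a common point, some candidate m lies in every interval, all x_i collapse to m, and the lower bound is 0, as expected.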
Computing Population Variance and Entropy under Interval Uncertainty: Linear-Time Algorithms
2006
Cited by 13 (6 self)
Abstract:
In statistical analysis of measurement results it is often necessary to compute the range [V̲, V̄] of the population variance V = (1/n) · ∑_{i=1}^n (x_i − E)², where E = (1/n) · ∑_{i=1}^n x_i, when we only know the intervals [x̃_i − Δ_i, x̃_i + Δ_i] of possible values of the x_i. While V̲ can be computed efficiently, the problem of computing V̄ is, in general, NP-hard. In our previous paper "Population Variance under Interval Uncertainty: A New Algorithm" (Reliable Computing, 2006, Vol. 12, No. 4, pp. 273–280), we showed that in …
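To see why computing V̄ is hard in general, note that the variance is convex, so its maximum over a box is attained at a vertex; with no further structure, that suggests checking all 2ⁿ vertices. The brute-force baseline below (our own illustrative sketch, feasible only for small n) is exactly the exponential enumeration that the paper's linear-time algorithms avoid under additional conditions on the intervals:

```python
import numpy as np
from itertools import product

def variance_upper_bound(los, his):
    """Upper endpoint of the range of the population variance over the box of
    intervals.  Variance is convex, so the maximum is attained at one of the
    2^n vertices of the box; enumerate them all (exponential -- for
    illustration only)."""
    best = 0.0
    for vertex in product(*zip(los, his)):   # each x_i is lo_i or hi_i
        best = max(best, np.var(vertex))
    return best
```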
Interval-Valued and Fuzzy-Valued Random Variables: From Computing Sample Variances to Computing Sample Covariances
Soft Methodology and Random Information Systems, Springer-Verlag, 2004
Cited by 12 (8 self)
Abstract:
Summary. Due to measurement uncertainty, often, instead of the actual values x_i of the measured quantities, we only know the intervals x_i = [x̃_i − Δ_i, x̃_i + Δ_i], where x̃_i is the measured value and Δ_i is the upper bound on the measurement error (provided, e.g., by the manufacturer of the measuring instrument). In such situations, instead of the exact value of a sample statistic such as the covariance C_{x,y}, we can only have an interval C_{x,y} of possible values of this statistic. It is known that, in general, computing such an interval C_{x,y} for C_{x,y} is an NP-hard problem. In this paper, we describe an algorithm that computes this range C_{x,y} for the case when the measurements are accurate enough, so that the intervals corresponding to different measurements do not intersect much.
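A brute-force baseline for the covariance range (our own illustrative sketch, not the paper's algorithm) exploits the fact that C_{x,y} = (1/n)·∑ x_i·y_i − mean(x)·mean(y) is linear in each x_i and each y_i separately, so its extrema over the box of intervals are attained at vertices. Enumerating all vertices is exponential and only feasible for tiny n, which is why the paper's narrow-interval algorithm matters:

```python
import numpy as np
from itertools import product

def covariance_range(x_los, x_his, y_los, y_his):
    """Range [C_lo, C_hi] of the sample covariance
    C = (1/n) * sum(x_i * y_i) - mean(x) * mean(y)
    over interval data, by enumerating all box vertices (C is multilinear,
    so its extrema occur at vertices).  Exponential -- illustration only."""
    lo = hi = None
    for xs in product(*zip(x_los, x_his)):       # each x_i is lo or hi
        for ys in product(*zip(y_los, y_his)):   # each y_i is lo or hi
            c = np.mean(np.multiply(xs, ys)) - np.mean(xs) * np.mean(ys)
            lo = c if lo is None else min(lo, c)
            hi = c if hi is None else max(hi, c)
    return lo, hi
```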
Fast Algorithms for Computing Statistics under Interval Uncertainty, with Applications to Computer Science and to Electrical and Computer Engineering
2007
Cited by 11 (6 self)
Abstract:
Computing statistics is important. In many engineering applications, we are interested in computing statistics. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. For each of these characteristics C, there is an expression C(x_1, …, x_n) that enables us to provide an estimate for C based on the observed values x_1, …, x_n. For example: a reasonable statistic for estimating the mean value of a probability distribution is the population average E(x_1, …, x_n) = (1/n) · (x_1 + … + x_n); a reasonable statistic for estimating the variance V is the population variance V(x_1, …, x_n) = (1/n) · ∑ …
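The two estimators named in the abstract (the second formula is truncated in the listing; the completion below uses the standard population-variance definition with 1/n normalization, consistent with the other abstracts above) can be written directly:

```python
def population_average(xs):
    """E(x_1, ..., x_n) = (1/n) * (x_1 + ... + x_n)."""
    return sum(xs) / len(xs)

def population_variance(xs):
    """V(x_1, ..., x_n) = (1/n) * sum_i (x_i - E)^2.
    Note the 1/n (population) normalization, not 1/(n-1)."""
    E = population_average(xs)
    return sum((x - E) ** 2 for x in xs) / len(xs)
```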