Results 1-10 of 41
Error Estimations For Indirect Measurements: Randomized Vs. Deterministic Algorithms For "Black-Box" Programs
 Handbook on Randomized Computing, Kluwer, 2001
, 2000
Abstract
Cited by 29 (13 self)
In many real-life situations, it is very difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure a distance to a distant galaxy or the amount of oil in a given well. Since we cannot measure such quantities directly, we can measure them indirectly: by first measuring some related quantities x_1, ..., x_n, and then by using the known relation between x_i and y to reconstruct the value of the desired quantity y. In practice, it is often very important to estimate the error of the resulting indirect measurement. In this paper, we describe and compare different deterministic and randomized algorithms for solving this problem in the situation when a program for transforming the estimates x̃_1, ..., x̃_n for x_i into an estimate for y is only available as a black box (with no source code at hand). We consider this problem in two settings: statistical, when measurement errors ∆x_i = x̃_i − x_i are inde...
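To illustrate the two families of methods being compared, the sketch below contrasts a deterministic linearized bound (finite-difference sensitivities) with a naive randomized (Monte Carlo) bound for a hypothetical black-box function f. The function, the estimates, and the error bounds ∆_i are invented for the example; neither routine is the paper's specific algorithm.

```python
import random

def f(x1, x2):
    # hypothetical black-box program: we may call it, but not inspect it
    return x1 * x2 + x1

def deterministic_error_bound(f, est, deltas, h=1e-6):
    """Linearized worst-case bound sum_i |df/dx_i| * Delta_i, with each
    partial derivative estimated by a finite difference (n + 1 calls)."""
    y0 = f(*est)
    bound = 0.0
    for i, (xi, di) in enumerate(zip(est, deltas)):
        shifted = list(est)
        shifted[i] = xi + h
        dfdxi = (f(*shifted) - y0) / h
        bound += abs(dfdxi) * di
    return bound

def randomized_error_bound(f, est, deltas, trials=1000):
    """Monte Carlo estimate: simulate errors uniformly in
    [-Delta_i, Delta_i] and record the largest observed |Delta y|."""
    y0 = f(*est)
    worst = 0.0
    for _ in range(trials):
        sample = [xi + random.uniform(-di, di)
                  for xi, di in zip(est, deltas)]
        worst = max(worst, abs(f(*sample) - y0))
    return worst

print(deterministic_error_bound(f, [2.0, 3.0], [0.1, 0.1]))
```

The deterministic bound costs a fixed n + 1 black-box calls; the randomized bound trades call count for the ability to work without smoothness assumptions.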
Astrogeometry, Error Estimation, and Other Applications of Set-Valued Analysis
 ACM SIGNUM Newsletter
, 1996
Abstract
Cited by 27 (26 self)
In many real-life application problems, we are interested in numbers, namely, in the numerical values of the physical quantities. There are, however, at least two classes of problems in which we are actually interested in sets: • In image processing (e.g., in astronomy), the desired black-and-white image is, from the mathematical viewpoint, a set.
A New Cauchy-Based Black-Box Technique for Uncertainty in Risk Analysis
 in Risk Analysis, Reliability Engineering and Systems Safety
, 2002
Abstract
Cited by 25 (13 self)
Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need to describe methods of handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general formalism for handling different types of uncertainty, and to describe a new black-box technique for processing this type of uncertainty.
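One way a Cauchy-based black-box technique can work (a sketch under a linearization assumption, not necessarily the exact method of the paper): if each unknown error is simulated as a Cauchy deviate with scale ∆_i, then the resulting change in the black-box output is approximately Cauchy with scale D = ∑_i |∂f/∂x_i| · ∆_i, the linearized worst-case bound, which can be recovered by maximum likelihood without ever differentiating f. The function f below is a hypothetical stand-in for the black box.

```python
import math
import random

def f(x1, x2):
    # hypothetical black box (linear here, so the linearization is exact)
    return 3.0 * x1 - 2.0 * x2

def cauchy_deviate_bound(f, est, deltas, trials=2000):
    """Simulate errors as Delta_i-scaled Cauchy deviates; under
    linearization, Delta y is Cauchy with scale D = sum |df/dx_i|*Delta_i.
    Recover D as the maximum-likelihood Cauchy scale of the samples."""
    y0 = f(*est)
    c = []
    for _ in range(trials):
        d = [di * math.tan(math.pi * (random.random() - 0.5))
             for di in deltas]
        c.append(f(*[xi + dij for xi, dij in zip(est, d)]) - y0)
    # MLE condition for the Cauchy scale D: sum 1/(1+(c_k/D)^2) = N/2;
    # the left side increases in D, so solve by bisection.
    lo, hi = 1e-12, max(abs(v) for v in c) + 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        s = sum(1.0 / (1.0 + (v / mid) ** 2) for v in c)
        if s < trials / 2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the f above with ∆_1 = ∆_2 = 0.1, the true linearized bound is 3·0.1 + 2·0.1 = 0.5, and the estimate converges to it as the number of trials grows.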
Population Variance under Interval Uncertainty: A New Algorithm
 Reliable Computing
, 2006
Abstract
Cited by 20 (17 self)
In statistical analysis of measurement results, it is often beneficial to compute the range of the population variance V = (1/n) · ∑_{i=1}^{n} (x_i − E)^2.
Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty
, 2007
Abstract
Cited by 20 (14 self)
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
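For some of the descriptive statistics discussed above, the range over interval data is easy to compute exactly, because the statistic is non-decreasing in each data point: the range is then simply the statistic applied to all lower endpoints, paired with the statistic applied to all upper endpoints. A minimal sketch for the sample mean and median (harder statistics like variance are the subject of other entries in this list):

```python
def interval_mean(intervals):
    """The sample mean is monotone in each x_i, so its exact range over
    interval data is [mean of lower endpoints, mean of upper endpoints]."""
    n = len(intervals)
    return (sum(a for a, b in intervals) / n,
            sum(b for a, b in intervals) / n)

def interval_median(intervals):
    """The median is also monotone in each x_i, so its exact range is
    [median of lower endpoints, median of upper endpoints]."""
    los = sorted(a for a, b in intervals)
    his = sorted(b for a, b in intervals)
    n, mid = len(intervals), len(intervals) // 2
    if n % 2 == 1:
        return (los[mid], his[mid])
    return ((los[mid - 1] + los[mid]) / 2, (his[mid - 1] + his[mid]) / 2)
```

This monotonicity argument is exactly what fails for the variance, which is why its range requires the specialized algorithms described elsewhere in this list.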
Outlier Detection Under Interval Uncertainty: Algorithmic Solvability and Computational Complexity
, 2003
Abstract
Cited by 19 (12 self)
In many application areas, it is important to detect outliers. The traditional engineering approach to outlier detection is that we start with some "normal" values x_1, ..., x_n, compute the sample average E and the sample standard deviation σ, and then mark a value x as an outlier if x is outside the k_0-sigma interval [E − k_0·σ, E + k_0·σ] (for some pre-selected parameter k_0). In real life, we often have only interval ranges [x̲_i, x̄_i] for the normal values x_1, ..., x_n. In this case, we only have intervals of possible values for the bounds E − k_0·σ and E + k_0·σ. We can therefore identify outliers as values that are outside all k_0-sigma intervals.
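A conservative version of this "outside all k_0-sigma intervals" test can be sketched as follows. This is my own small-n illustration, not the paper's algorithm: it brute-forces the maximum of σ over the vertices of the box (valid because the variance is convex in each x_i), which is exponential in n.

```python
from itertools import product
from math import sqrt

def guaranteed_outlier(x, intervals, k0=2.0):
    """Conservative check: x is certainly an outlier if it lies outside
    every k0-sigma interval [E - k0*s, E + k0*s] consistent with the
    interval data. Sufficient condition used here:
    x > max(E) + k0*max(s)  or  x < min(E) - k0*max(s)."""
    n = len(intervals)
    e_lo = sum(a for a, b in intervals) / n   # exact min of E
    e_hi = sum(b for a, b in intervals) / n   # exact max of E
    s_hi = 0.0
    for corner in product(*intervals):        # vertex enumeration: 2^n
        e = sum(corner) / n
        s_hi = max(s_hi, sqrt(sum((v - e) ** 2 for v in corner) / n))
    return x > e_hi + k0 * s_hi or x < e_lo - k0 * s_hi
```

Because max(E) + k_0·max(σ) over-approximates the largest attainable E + k_0·σ, this check never flags a non-outlier, though it may miss some values that a tighter analysis would catch.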
Exact bounds on finite populations of interval data
 Reliable Computing
, 2001
Abstract
Cited by 14 (10 self)
In this paper, we start research into using intervals to bound the impact of bounded measurement errors on the computation of bounds on finite population parameters (“descriptive statistics”). Specifically, we provide a feasible (quadratic time) algorithm for computing the lower bound σ̲² on the finite population variance function of interval data. We prove that the problem of computing the upper bound σ̄² is, in general, NP-hard. We provide a feasible algorithm that computes σ̄² under reasonable, easily verifiable conditions, and provide preliminary results on computing other functions of finite populations.
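The lower bound is tractable because of a convexity argument, which the sketch below exploits directly (this is an alternative numerical sketch, not the paper's exact quadratic-time algorithm): since V(x) = min over a of (1/n)·∑(x_i − a)², minimizing V over the box reduces to minimizing g(a) = (1/n)·∑ dist(a, [x̲_i, x̄_i])², a convex function of a single variable.

```python
def variance_lower_bound(intervals, iters=200):
    """Lower endpoint of the variance range over interval data, via the
    identity min_x V(x) = min_a (1/n) * sum dist(a, [lo_i, hi_i])^2;
    the right-hand side is convex in a, so ternary search finds it."""
    n = len(intervals)

    def g(a):
        total = 0.0
        for lo, hi in intervals:
            d = max(lo - a, 0.0, a - hi)  # distance from a to [lo, hi]
            total += d * d
        return total / n

    left = min(lo for lo, hi in intervals)
    right = max(hi for lo, hi in intervals)
    for _ in range(iters):
        m1 = left + (right - left) / 3
        m2 = right - (right - left) / 3
        if g(m1) <= g(m2):
            right = m2
        else:
            left = m1
    return g((left + right) / 2)
```

If all the intervals share a common point, the bound is 0 (every x_i can be set to that point); otherwise the search settles on the a that minimizes the total squared distance to the intervals.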
Interval-Valued and Fuzzy-Valued Random Variables: From Computing Sample Variances to Computing Sample Covariances
 Soft Methodology and Random Information Systems, Springer-Verlag, 2004
Abstract
Cited by 13 (8 self)
Summary. Due to measurement uncertainty, often, instead of the actual values x_i of the measured quantities, we only know the intervals x_i = [x̃_i − ∆_i, x̃_i + ∆_i], where x̃_i is the measured value and ∆_i is the upper bound on the measurement error (provided, e.g., by the manufacturer of the measuring instrument). In such situations, instead of the exact value of a sample statistic such as the covariance C_{x,y}, we can only have an interval C_{x,y} of possible values of this statistic. It is known that, in general, computing such an interval C_{x,y} for C_{x,y} is an NP-hard problem. In this paper, we describe an algorithm that computes this range C_{x,y} for the case when the measurements are accurate enough, so that the intervals corresponding to different measurements do not intersect much.
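A useful structural fact about this problem: the sample covariance is linear in each individual x_i (and in each y_i) when the other values are held fixed, so its extrema over the box of intervals are always attained at vertices. That gives an exact but exponential brute-force reference (my small-n sketch below, not the paper's efficient algorithm for barely-intersecting intervals):

```python
from itertools import product

def covariance_range_bruteforce(xs, ys):
    """Exact range of the sample covariance C = (1/n)*sum(x_i*y_i) - Ex*Ey
    when each x_i and y_i lies in an interval. C is linear in each
    coordinate, so it suffices to enumerate the 2^(2n) vertices."""
    n = len(xs)

    def cov(x, y):
        ex = sum(x) / n
        ey = sum(y) / n
        return sum(xi * yi for xi, yi in zip(x, y)) / n - ex * ey

    vals = [cov(x, y) for x in product(*xs) for y in product(*ys)]
    return (min(vals), max(vals))
```

Even this tiny example shows why the problem is delicate: with x and y each only known up to [0, 1], the covariance of two points can already be anywhere in [−0.25, 0.25].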
Fast Quantum Algorithms for Handling Probabilistic, Interval, and Fuzzy Uncertainty
, 2003
Abstract
Cited by 12 (9 self)
We show how quantum computing can speed up computations related to processing probabilistic, interval, and fuzzy uncertainty.
Fast algorithm for computing the upper endpoint of sample variance for interval data: case of sufficiently accurate measurements
 Reliable Computing
, 2006
Abstract
Cited by 12 (7 self)
When we have n results x_1, ..., x_n of repeated measurement of the same quantity, the traditional statistical approach usually starts with computing their sample average E and their sample variance V. Often, due to the inevitable measurement uncertainty, we do not know the exact values of these quantities; we only know the intervals x_i of possible values of x_i. In such situations, for different possible values x_i ∈ x_i, we get different values of the variance. We must therefore find the range V of possible values of V. It is known that, in general, this problem is NP-hard. For the case when the measurements are sufficiently accurate (in some precise sense), it is known that we can compute the interval V in quadratic time O(n²). In this paper, we describe a new algorithm for computing V that requires time O(n · log(n)), which is much faster than O(n²).
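When implementing or checking fast algorithms like this one, a slow reference answer is handy. Because the variance is convex in each x_i, its maximum over the box is always attained at a vertex, so for small n the exact upper endpoint can be brute-forced (this 2^n sketch is only a test oracle, not the O(n·log(n)) algorithm of the paper):

```python
from itertools import product

def variance_upper_bound_bruteforce(intervals):
    """Exact upper endpoint of the variance range over interval data,
    by enumerating all 2^n vertices of the box (the variance is convex
    in each x_i, so the maximum is attained at a vertex)."""
    n = len(intervals)
    best = 0.0
    for corner in product(*intervals):
        e = sum(corner) / n
        best = max(best, sum((v - e) ** 2 for v in corner) / n)
    return best
```

A fast implementation can then be validated by comparing its output against this oracle on many random small instances.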