Results 1–10 of 14
Towards combining probabilistic and interval uncertainty in engineering calculations: algorithms for computing statistics under interval uncertainty, and their computational complexity
 Reliable Computing
, 2006
Cited by 41 (40 self)
Abstract. In many engineering applications, we have to combine probabilistic and interval uncertainty. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as the mean, variance, autocorrelation, and correlation with other measurements. In environmental measurements, we often know the measured values only with interval uncertainty. We must therefore modify the existing statistical algorithms to process such interval data. In this paper, we provide a survey of algorithms for computing various statistics under interval uncertainty and of their computational complexity. The survey includes both known and new algorithms.
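As a concrete starting point for the statistics surveyed above, the range of the sample mean under interval uncertainty is easy to compute exactly: the mean is monotonically increasing in each input, so its extreme values are attained at the interval endpoints. A minimal sketch (the function name is ours, not the paper's):

```python
from typing import List, Tuple

def mean_range(intervals: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Exact range of the sample mean E = (1/n) * sum(x_i) when each x_i
    is only known to lie in [lo_i, hi_i].  Because E is monotonically
    increasing in every x_i, the endpoints of the range are attained at
    the interval endpoints, so this runs in linear time."""
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n
    hi = sum(b for _, b in intervals) / n
    return lo, hi
```

For example, `mean_range([(1, 2), (3, 5)])` returns `(2.0, 3.5)`. Unlike the variance discussed in later entries, no bound here is computationally hard.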
Novel Approaches to Numerical Software with Result Verification
 NUMERICAL SOFTWARE WITH RESULT VERIFICATION, INTERNATIONAL DAGSTUHL SEMINAR, DAGSTUHL
, 2003
Cited by 26 (18 self)
Traditional design of numerical software with result verification is based on the assumption that we know the algorithm f that transforms the inputs x1, ..., xn into the output y = f(x1, ..., xn), and we know the intervals of possible values of the inputs. Many real-life problems go beyond this paradigm. In some cases, we do not have an algorithm f; we only know some relation (constraints) between the xi and y. In other cases, in addition to knowing the intervals, we may know some relations between the inputs; we may have some information about the probabilities of different values of xi, and we may know the exact values of some of the inputs. In this paper, we describe approaches for solving these real-life problems. In Section 2, we describe interval consistency techniques related to handling constraints; in Section 3, we describe techniques that take probabilistic information into consideration; and in Section 4, we overview techniques for processing exact real numbers.
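The interval consistency techniques mentioned for Section 2 can be illustrated on the simplest constraint, x + y = z: each variable's interval is intersected with the interval implied by the other two. This is our toy sketch of one hull-consistency-style narrowing pass, not code from the seminar:

```python
def narrow_sum(x, y, z):
    """One narrowing pass for the constraint x + y == z, where x, y, z
    are (lo, hi) intervals: x is intersected with z - y, y with z - x,
    and z with x + y (all in interval arithmetic)."""
    def inter(a, b):
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        if lo > hi:
            raise ValueError("inconsistent constraint")
        return (lo, hi)
    x = inter(x, (z[0] - y[1], z[1] - y[0]))  # x ⊆ z - y
    y = inter(y, (z[0] - x[1], z[1] - x[0]))  # y ⊆ z - x
    z = inter(z, (x[0] + y[0], x[1] + y[1]))  # z ⊆ x + y
    return x, y, z
```

For example, `narrow_sum((0, 10), (0, 10), (4, 5))` narrows x and y to `(0, 5)`: any value above 5 for either variable would force the sum above 5. A full solver would iterate such passes over all constraints until a fixed point.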
Computing Population Variance and Entropy under Interval Uncertainty: Linear-Time Algorithms
, 2006
Cited by 11 (7 self)
In statistical analysis of measurement results, it is often necessary to compute the range [V̲, V̄] of the population variance V = (1/n) · ∑_{i=1}^n (x_i − E)², where E = (1/n) · ∑_{i=1}^n x_i, when we only know the intervals [x̃_i − ∆_i, x̃_i + ∆_i] of possible values of the x_i. While V̲ can be computed efficiently, the problem of computing V̄ is, in general, NP-hard. In our previous paper "Population Variance under Interval Uncertainty: A New Algorithm" (Reliable Computing, 2006, Vol. 12, No. 4, pp. 273–280) we showed that in …
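A sketch of both bounds, consistent with the abstract's facts that V̲ is tractable and V̄ is NP-hard in general. For V̄ we brute-force the corners of the box of intervals (valid because V is convex in each x_i, so the maximum is at a corner, but exponential and meant for small n only); for V̲ we use the standard observation that at the minimum every x_i equals a common value t clamped into its interval, where t is the mean of the clamped values, and find that fixed point by bisection. Function names are ours:

```python
from itertools import product

def variance(xs):
    """Population variance V = (1/n) * sum((x_i - E)^2)."""
    n = len(xs)
    e = sum(xs) / n
    return sum((x - e) ** 2 for x in xs) / n

def variance_upper(intervals):
    """Largest possible variance over the box of intervals.  V is convex
    in each x_i, so the maximum is attained at a corner; this tries all
    2^n corners and is only usable for small n (the problem is NP-hard
    in general)."""
    return max(variance(corner) for corner in product(*intervals))

def variance_lower(intervals, iters=100):
    """Smallest possible variance.  At the minimum each x_i is the common
    value t clamped into [lo_i, hi_i], with t equal to the mean of the
    clamped values; bisection finds that fixed point."""
    clamp = lambda t, lo, hi: min(max(t, lo), hi)
    a = min(lo for lo, _ in intervals)
    b = max(hi for _, hi in intervals)
    for _ in range(iters):
        t = (a + b) / 2
        xs = [clamp(t, lo, hi) for lo, hi in intervals]
        if sum(xs) / len(xs) >= t:
            a = t
        else:
            b = t
    t = (a + b) / 2
    return variance([clamp(t, lo, hi) for lo, hi in intervals])
```

For `[(0, 1), (3, 4)]` this gives V̲ = 1 (at x = (1, 3)) and V̄ = 4 (at the corner (0, 4)); when all intervals share a common point, V̲ = 0.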
Detecting Outliers under Interval Uncertainty: A New Algorithm Based on Constraint Satisfaction
Proceedings of the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems IPMU'06
Cited by 9 (8 self)
In many application areas, it is important to detect outliers. The traditional engineering approach to outlier detection is that we start with some "normal" values x1, ..., xn, compute the sample average E and the sample standard deviation σ, and then mark a value x as an outlier if x is outside the k0-sigma interval [E − k0 · σ, E + k0 · σ] (for some pre-selected parameter k0). In real life, we often have only interval ranges [x̲i, x̄i] for the normal values x1, ..., xn. In this case, we only have intervals of possible values for the bounds L def= E − k0 · σ and U def= E + k0 · σ. We can therefore identify outliers as values that are outside all k0-sigma intervals, i.e., values which are outside the interval [L, U]. In general, the problem of computing L and U is NP-hard; a polynomial-time …
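The classical point-valued k0-sigma test described in the first sentences is straightforward; it is the interval version of the bounds L and U that, as the abstract notes, is NP-hard in general. A sketch of the point-valued case (names ours):

```python
def outlier_test(normal, k0=3.0):
    """Classical k0-sigma outlier test: from the 'normal' sample compute
    E and sigma, and flag any x outside [E - k0*sigma, E + k0*sigma].
    Returns the interval bounds and a predicate."""
    n = len(normal)
    e = sum(normal) / n
    sigma = (sum((x - e) ** 2 for x in normal) / n) ** 0.5
    lo, hi = e - k0 * sigma, e + k0 * sigma
    return lo, hi, (lambda x: x < lo or x > hi)
```

For example, with normal values `[9, 10, 11]` and k0 = 2, the interval is roughly [8.37, 11.63], so 15 is flagged as an outlier while 10 is not. Under interval uncertainty one would instead need bounds on L and U over all possible samples, which is where the hardness arises.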
Statistical Data Processing under Interval Uncertainty: Algorithms and Computational Complexity
 Soft Methods for Integrated Uncertainty Modeling
, 2006
Cited by 3 (2 self)
Why indirect measurements? In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. Examples of such quantities are the distance to a star and the amount of oil in a given well. Since we cannot measure y directly, a natural idea is to measure y indirectly. Specifically, we find some easier-to-measure quantities x1, ..., xn which are related to y by a known relation y = f(x1, ..., xn); this relation may be a simple functional transformation or a complex algorithm (e.g., for the amount of oil, a numerical solution to an inverse problem). Then, to estimate y, we first measure the values of the quantities x1, ..., xn, and then we use the results x̃1, ..., x̃n of these measurements to compute an estimate ỹ for y as ỹ = f(x̃1, ..., x̃n).
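When the measured inputs x̃i come with interval uncertainty, the indirect-measurement scheme ỹ = f(x̃1, ..., x̃n) yields a range of possible y. A naive sketch of that range (ours, not from the paper): sample each input interval on a grid and take the min/max of f. Note this is only a heuristic inner estimate; guaranteed enclosures require interval arithmetic or monotonicity arguments.

```python
from itertools import product

def estimate_range(f, intervals, grid=5):
    """Heuristic estimate of the range of y = f(x1, ..., xn) when each
    xi lies in an interval [lo_i, hi_i]: evaluate f on a small grid over
    each interval (endpoints included) and take min/max.  Can
    underestimate the true range for non-monotone f."""
    axes = [[lo + (hi - lo) * k / (grid - 1) for k in range(grid)]
            for lo, hi in intervals]
    values = [f(*point) for point in product(*axes)]
    return min(values), max(values)
```

For a function monotone in each argument, such as f(a, b) = a + b on [(0, 1), (2, 3)], the grid endpoints already give the exact range [2, 4]; for general f, the number of grid points grows exponentially with n, which hints at why these range problems get hard.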
Adding Constraints to Situations When, In Addition to Intervals, We Also Have Partial Information about Probabilities
Cited by 1 (0 self)
In many practical situations, we need to combine probabilistic and interval uncertainty. For example, we need to compute statistics like the population mean E = (1/n) · ∑_{i=1}^n x_i or the population variance V = (1/n) · ∑_{i=1}^n (x_i − E)² in situations when we only know the intervals of possible values of the x_i. In this case, it is desirable to compute the range of the …
Computing Mean and Variance Under Dempster-Shafer Uncertainty: Towards Faster Algorithms
Cited by 1 (0 self)
In many real-life situations, we only have partial information about the actual probability distribution. For example, under Dempster-Shafer uncertainty, we only know the masses m1, ..., mn assigned to different sets S1, ..., Sn, but we do not know the distribution within each set Si. Because of this uncertainty, there are many possible probability distributions consistent with our knowledge; different distributions have, in general, different values of standard statistical characteristics such as mean and variance. It is therefore desirable, given a Dempster-Shafer knowledge base, to compute the ranges [E̲, Ē] and [V̲, V̄] of possible values of the mean E and of the variance V. In their recent paper, A. T. Langewisch and F. F. Choobineh show how to compute these ranges in polynomial time. In particular, they reduce the problem of computing V̲ to the problem of minimizing a convex quadratic function, a problem which can be solved in time O(n² · log(n)). We show …
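For the common special case where every focal element S_i is an interval [a_i, b_i], the range of the mean has a simple closed form: E is smallest when all the mass inside each S_i sits at a_i, and largest when it sits at b_i (the variance bounds, by contrast, are the nontrivial part handled by Langewisch and Choobineh). A sketch of the easy mean bounds, names ours:

```python
def ds_mean_range(masses, focal_intervals):
    """Range [E_lower, E_upper] of the mean under a Dempster-Shafer
    structure whose focal elements are intervals [a_i, b_i] with masses
    m_i (masses assumed to sum to 1): push each mass to the left
    endpoint for the minimum, to the right endpoint for the maximum."""
    e_lo = sum(m * a for m, (a, _) in zip(masses, focal_intervals))
    e_hi = sum(m * b for m, (_, b) in zip(masses, focal_intervals))
    return e_lo, e_hi
```

For example, masses (0.5, 0.5) on focal intervals [0, 1] and [2, 4] give the mean range [1.0, 2.5]. With one focal element of mass 1, this reduces to ordinary interval uncertainty.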
Application-Motivated Combinations of Fuzzy, Interval, and Probability Approaches, with Application to Geoinformatics
Abstract—Since the 1960s, many algorithms have been designed to deal with interval uncertainty. In the last decade, there has been a lot of progress in extending these algorithms to the case when we have a combination of interval, probabilistic, and fuzzy uncertainty. We provide an overview of related algorithms, results, and remaining open problems. I. MAIN PROBLEM Why indirect measurements? In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. Examples of such quantities are the distance to a star and the amount of oil in a given well. Since we cannot measure y directly, a natural idea is to measure y indirectly. Specifically, we find some easier-to-measure quantities x1, ..., xn which are related to y by a known relation y = f(x1, ..., xn); this relation may be a simple functional transformation or a complex algorithm (e.g., for the amount of oil, a numerical solution to an inverse problem). Then, to estimate y, we first measure the values of the quantities x1, ..., xn, and then we use the results x̃1, ..., x̃n of these measurements to compute an estimate ỹ for y as ỹ = f(x̃1, ..., x̃n).
FAST ALGORITHMS FOR COMPUTING STATISTICS … (Ph.D. dissertation, Electrical and Computer Engineering)
This dissertation is dedicated to my deeply loved grandfather, who passed away in 2003; his wish to pursue graduate study could not be fulfilled due to World War II. And to my parents, my wife Qianyin, and my son Kevin, for their great love. FAST ALGORITHMS FOR COMPUTING STATISTICS …
Computing Standard-Deviation-to-Mean and Variance-to-Mean Ratios under Interval Uncertainty Is NP-Hard
Once we have a collection of values x1, ..., xn corresponding to a class of objects, a usual way to decide whether a new object with the value x of the corresponding property belongs to this class is to check whether the value x belongs to the interval [E − k · σ, E + k · σ], where E def= (1/n) · ∑_{i=1}^n x_i is the sample mean, σ = √V, where V def= (1/n) · ∑_{i=1}^n (x_i − E)² is the sample variance, and the parameter k is determined by the degree of confidence with which we want to make the decision. For each value x, the degree of confidence that x belongs to the class depends on the smallest value k for which x ∈ [E − k · σ, E + k · σ], i.e., on the ratio r def= 1/k = …
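From the abstract's own definitions, the smallest k with x ∈ [E − k·σ, E + k·σ] is k = |x − E|/σ, so the ratio is r = 1/k = σ/|x − E|; for x = 0 this becomes the standard-deviation-to-mean ratio of the title. A sketch for point (non-interval) data, names ours; the paper's result is that bounding such ratios under interval uncertainty is NP-hard.

```python
def confidence_ratio(sample, x):
    """r = 1/k = sigma / |x - E|, where k is the smallest value with
    x in [E - k*sigma, E + k*sigma]: large r (small k) means x is close
    to the class mean relative to the spread, i.e. high confidence of
    membership."""
    n = len(sample)
    e = sum(sample) / n
    sigma = (sum((v - e) ** 2 for v in sample) / n) ** 0.5
    return sigma / abs(x - e)
```

For the sample [1, 2, 3] (E = 2, σ = √(2/3)), the value x = 4 gives r = σ/2 ≈ 0.41, while x = 2.5 gives the much larger r = 2σ ≈ 1.63.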