Results 1–10 of 50
Towards combining probabilistic and interval uncertainty in engineering calculations: algorithms for computing statistics under interval uncertainty, and their computational complexity
Reliable Computing, 2006
Abstract

Cited by 41 (40 self)
Abstract. In many engineering applications, we have to combine probabilistic and interval uncertainty. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, correlation with other measurements. In environmental measurements, we often only measure the values with interval uncertainty. We must therefore modify the existing statistical algorithms to process such interval data. In this paper, we provide a survey of algorithms for computing various statistics under interval uncertainty and their computational complexity. The survey includes both known and new algorithms.
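As a minimal illustration of the kind of problem this survey addresses (a hypothetical sketch, not an algorithm from the paper): the sample mean is monotone in each measurement, so its exact range under interval uncertainty follows directly from the endpoint sums. Statistics such as variance and correlation, which the survey focuses on, do not admit such a simple treatment.

```python
# Sketch: exact range of the sample mean under interval uncertainty.
# The mean is monotone in each x_i, so the extremes are attained when
# every x_i sits at the corresponding interval endpoint.

def mean_range(intervals):
    """Range [E_lo, E_hi] of the sample mean when each measurement
    x_i is only known to lie in the interval (lo_i, hi_i)."""
    n = len(intervals)
    e_lo = sum(lo for lo, hi in intervals) / n
    e_hi = sum(hi for lo, hi in intervals) / n
    return e_lo, e_hi

# Hypothetical pollution-level readings known only to within bounds:
readings = [(0.8, 1.2), (1.9, 2.1), (2.5, 3.5)]
print(mean_range(readings))  # (1.7333..., 2.2666...)
```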
Novel Approaches to Numerical Software with Result Verification
Numerical Software with Result Verification, International Dagstuhl Seminar, Dagstuhl, 2003
Abstract

Cited by 26 (18 self)
Traditional design of numerical software with result verification is based on the assumption that we know the algorithm f(x1, ..., xn) that transforms the inputs x1, ..., xn into the output y = f(x1, ..., xn), and that we know the intervals of possible values of the inputs. Many real-life problems go beyond this paradigm. In some cases, we do not have an algorithm f; we only know some relation (constraints) between the inputs xi and the output y. In other cases, in addition to knowing the intervals, we may know some relations between the inputs; we may have some information about the probabilities of different values of xi; and we may know the exact values of some of the inputs. In this paper, we describe approaches for solving these real-life problems. In Section 2, we describe interval consistency techniques related to handling constraints; in Section 3, we describe techniques that take probabilistic information into consideration; and in Section 4, we overview techniques for processing exact real numbers.
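The interval-consistency idea mentioned for the constraint case can be sketched as follows (a minimal, hypothetical example, not code from the paper): even without an algorithm f, a constraint such as x + y = z lets each interval narrow the others.

```python
# Sketch: one hull-consistency pass for the constraint x + y = z,
# where x, y, z are intervals represented as (lo, hi) pairs.

def narrow_sum(x, y, z):
    def inter(a, b):  # intersection of two intervals
        return (max(a[0], b[0]), min(a[1], b[1]))
    z = inter(z, (x[0] + y[0], x[1] + y[1]))  # z := z ∩ (x + y)
    x = inter(x, (z[0] - y[1], z[1] - y[0]))  # x := x ∩ (z - y)
    y = inter(y, (z[0] - x[1], z[1] - x[0]))  # y := y ∩ (z - x)
    return x, y, z

print(narrow_sum((0, 10), (0, 10), (4, 6)))
# ((0, 6), (0, 6), (4, 6)) -- both inputs narrowed by the constraint
```

In a full solver this pass would be iterated to a fixed point; one pass already shows how constraints propagate information between intervals.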
Possibility theory and statistical reasoning
Computational Statistics &amp; Data Analysis, 2006
Abstract

Cited by 26 (2 self)
Numerical possibility distributions can encode special convex families of probability measures. The connection between possibility theory and probability theory is potentially fruitful in the scope of statistical reasoning when uncertainty due to variability of observations should be distinguished from uncertainty due to incomplete information. This paper proposes an overview of numerical possibility theory. Its aim is to show that some notions in statistics are naturally interpreted in the language of this theory. First, probabilistic inequalities (like Chebyshev's) offer a natural setting for devising possibility distributions from poor probabilistic information. Moreover, likelihood functions obey the laws of possibility theory when no prior probability is available. Possibility distributions also generalize the notion of confidence or prediction intervals, shedding some light on the role of the mode of asymmetric probability densities in the derivation of maximally informative interval substitutes of probabilistic information. Finally, the simulation of fuzzy sets comes down to selecting a probabilistic representation of a possibility distribution, which coincides with the Shapley value of the corresponding consonant capacity. This selection process is in agreement with Laplace's indifference principle and is closely connected with the mean interval of a fuzzy interval. It sheds light on the "defuzzification" process in fuzzy set theory and provides a natural definition of a subjective possibility distribution that sticks to the Bayesian framework of exchangeable bets. Potential applications to risk assessment are pointed out.
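The Chebyshev-based construction mentioned above can be sketched in a few lines (an illustrative fragment, not code from the paper): knowing only a mean m and standard deviation s, Chebyshev's inequality P(|X − m| ≥ k·s) ≤ 1/k² yields a possibility distribution that dominates every probability distribution with those two moments.

```python
# Sketch: possibility degree pi(x) = min(1, s^2 / (x - m)^2) obtained
# from Chebyshev's inequality, given only mean m and std. deviation s.

def chebyshev_possibility(x, m, s):
    if abs(x - m) <= s:          # within one standard deviation,
        return 1.0               # Chebyshev gives no constraint
    return (s / (x - m)) ** 2    # tail bound 1/k^2 at x = m + k*s

print(chebyshev_possibility(2.0, 0.0, 1.0))  # 0.25 (two sigmas out)
print(chebyshev_possibility(0.5, 0.0, 1.0))  # 1.0
```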
Population Variance under Interval Uncertainty: A New Algorithm
Reliable Computing, 2006
Abstract

Cited by 20 (17 self)
In statistical analysis of measurement results, it is often beneficial to compute the range of the population variance V = (1/n) · Σ (xi − E)², where the sum runs over i = 1, ..., n and E is the average of x1, ..., xn.
Outlier Detection Under Interval Uncertainty: Algorithmic Solvability and Computational Complexity
, 2003
Abstract

Cited by 19 (12 self)
In many application areas, it is important to detect outliers. The traditional engineering approach to outlier detection starts with some "normal" values x1, ..., xn, computes the sample average E and the sample standard deviation σ, and then marks a value x as an outlier if x lies outside the k0-sigma interval [E − k0·σ, E + k0·σ] (for some pre-selected parameter k0). In real life, we often have only interval ranges for the normal values x1, ..., xn. In this case, we only have intervals of possible values for the bounds E − k0·σ and E + k0·σ. We can therefore identify outliers as values that are outside all k0-sigma intervals.
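A small, hypothetical brute-force illustration of "outside all k0-sigma intervals" (exponential in n, so only for tiny examples): since E + k0·σ is convex and E − k0·σ is concave in (x1, ..., xn), both extreme bounds are attained at endpoint combinations, and any value beyond them is an outlier under every possible scenario.

```python
from itertools import product
from math import sqrt

def k0_sigma_bounds(intervals, k0):
    """Smallest possible E - k0*sigma and largest possible E + k0*sigma
    over all endpoint combinations of the given intervals."""
    lower, upper = float("inf"), float("-inf")
    for xs in product(*intervals):
        n = len(xs)
        e = sum(xs) / n
        s = sqrt(sum((x - e) ** 2 for x in xs) / n)
        lower = min(lower, e - k0 * s)
        upper = max(upper, e + k0 * s)
    return lower, upper  # x < lower or x > upper => guaranteed outlier

lo, up = k0_sigma_bounds([(0.9, 1.1), (1.9, 2.1), (2.9, 3.1)], 2.0)
print(lo, up)
```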
Joint Propagation and Exploitation of Probabilistic and Possibilistic Information in Risk Assessment Models
IEEE Transactions on Fuzzy Systems, vol. 14, 2006
Abstract

Cited by 17 (9 self)
Random variability and imprecision are two distinct facets of the uncertainty affecting parameters that influence the assessment of risk. While random variability can be represented by probability distribution functions, imprecision (or partial ignorance) is better accounted for by possibility distributions (or families of probability distributions). Because practical situations of risk computation often involve both types of uncertainty, methods are needed to combine these two modes of uncertainty representation in the propagation step. A hybrid method is presented here, which jointly propagates probabilistic and possibilistic uncertainty. It produces results in the form of a random fuzzy interval. This paper focuses on how to properly summarize this kind of information and how to address questions pertaining to the potential violation of some tolerance threshold. Whereas previously proposed exploitation procedures conflate variability and imprecision, thus yielding overly conservative results, a new approach is proposed, based on the theory of evidence, and is illustrated using synthetic examples.
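The hybrid propagation step can be sketched as follows (model, parameters, and names are invented for illustration; this is not the paper's method in full): one input is probabilistic and is Monte Carlo sampled, the other is possibilistic and is processed through its alpha-cuts, so each sample yields a fuzzy interval for the output.

```python
import random

def tri_cut(a, b, c, alpha):
    """Alpha-cut of a triangular fuzzy interval with support [a, c]
    and core b: an ordinary interval for each membership level."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def hybrid_propagate(n_samples, alphas, seed=0):
    """For f(a, b) = a + b (monotone, so interval arithmetic on the
    cuts is exact): sample the probabilistic input, propagate the
    possibilistic one cut by cut. Returns one fuzzy interval
    (list of cuts) per sample -- a random fuzzy interval."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        a = rng.gauss(10.0, 1.0)                    # probabilistic input
        cuts = []
        for alpha in alphas:
            lo, hi = tri_cut(1.0, 2.0, 4.0, alpha)  # possibilistic input
            cuts.append((a + lo, a + hi))
        samples.append(cuts)
    return samples

out = hybrid_propagate(1000, [0.0, 0.5, 1.0])
```

Summarizing `out` (e.g. averaging cut endpoints, or building a belief function over threshold violation) is exactly the question the paper addresses with evidence theory.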
The Empirical Variance of a Set of Fuzzy Intervals
IEEE Int. Conf. on Fuzzy Systems, Reno, Nevada
Abstract

Cited by 15 (4 self)
The profile method gives a tool to perform fuzzy interval computation under a condition of local monotony of the considered functions. It is a plain extension of interval analysis to fuzzy intervals, viewed as pairs of fuzzy bounds, and yields exact results without applying interval analysis to α-cuts. After a refresher on the notion of profile and its use in fuzzy interval analysis, we adapt the profile method to the computation of the empirical variance of a tuple of fuzzy intervals. To this end, we first reconsider results obtained by Ferson et al. on the computation of the empirical variance of a set of intervals. Finally, we apply our results to the definition of the variance of a single fuzzy interval, viewed as a family of its α-cuts, and compare this definition to previous ones.
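To show the quantity being computed (a sketch, not the profile method itself): at each alpha, the fuzzy intervals reduce to ordinary intervals, and the empirical variance then has an interval of possible values, here bounded by brute force over endpoints as in the Ferson et al. setting the paper builds on.

```python
from itertools import product

def tri_cut(a, b, c, alpha):
    """Alpha-cut of a triangular fuzzy interval (support [a, c], core b)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def variance(xs):
    n = len(xs)
    e = sum(xs) / n
    return sum((x - e) ** 2 for x in xs) / n

def fuzzy_variance_cuts(triangles, alphas):
    """Interval of possible empirical variances at each alpha level.
    The max over endpoint combinations is exact (variance is convex);
    the min over endpoints is only an upper estimate in general."""
    result = {}
    for alpha in alphas:
        cuts = [tri_cut(*t, alpha) for t in triangles]
        vals = [variance(v) for v in product(*cuts)]
        result[alpha] = (min(vals), max(vals))
    return result

print(fuzzy_variance_cuts([(0, 1, 2), (2, 3, 4)], [0.0, 1.0]))
# {0.0: (0.0, 4.0), 1.0: (1.0, 1.0)}
```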
Exact bounds on finite populations of interval data
Reliable Computing, 2001
Abstract

Cited by 14 (10 self)
In this paper, we start research into using intervals to bound the impact of bounded measurement errors on the computation of bounds on finite population parameters ("descriptive statistics"). Specifically, we provide a feasible (quadratic-time) algorithm for computing the lower bound of the finite population variance σ² of interval data. We prove that the problem of computing the upper bound is, in general, NP-hard. We provide a feasible algorithm that computes this upper bound under reasonable, easily verifiable conditions, and provide preliminary results on computing other functions of finite populations.
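A sketch of a feasible lower-bound computation consistent with the approach described (illustrative code, not the paper's exact algorithm): to minimize the variance, each xi is clipped toward a common value m, and the optimal m can be searched among the zones cut out by the sorted interval endpoints, avoiding exponential enumeration.

```python
def variance(xs):
    n = len(xs)
    e = sum(xs) / n
    return sum((x - e) ** 2 for x in xs) / n

def lower_variance(intervals):
    """Minimum possible variance when each x_i ranges over its interval.
    The minimizer has x_i = clip(m) for the optimal mean m, which is
    either an interval endpoint or the fixed-point mean inside a zone
    between consecutive sorted endpoints."""
    def clip(m):
        return [min(max(m, lo), hi) for lo, hi in intervals]
    points = sorted({e for iv in intervals for e in iv})
    candidates = list(points)
    for a, b in zip(points, points[1:]):
        # values forced to an endpoint when the common value m is in (a, b)
        fixed = [hi for lo, hi in intervals if hi <= a]
        fixed += [lo for lo, hi in intervals if lo >= b]
        if fixed:
            m = sum(fixed) / len(fixed)  # fixed point: m = mean of clipped
            if a <= m <= b:
                candidates.append(m)
    return min(variance(clip(m)) for m in candidates)

print(lower_variance([(0.0, 1.0), (2.0, 3.0)]))  # 0.25
```

If all intervals share a common point, the minimum is 0; as written the search is quadratic, matching the feasibility claim above.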
Fast Quantum Algorithms for Handling Probabilistic, Interval, and Fuzzy Uncertainty
2003
Abstract

Cited by 12 (9 self)
We show how quantum computing can speed up computations related to processing probabilistic, interval, and fuzzy uncertainty.
Fast algorithm for computing the upper endpoint of sample variance for interval data: case of sufficiently accurate measurements
Reliable Computing, 2006
Abstract

Cited by 12 (7 self)
When we have n results x1, ..., xn of repeated measurements of the same quantity, the traditional statistical approach usually starts with computing their sample average E and their sample variance V. Often, due to the inevitable measurement uncertainty, we do not know the exact values of these quantities; we only know, for each xi, an interval of its possible values. In such situations, different possible values of the xi lead to different values of the variance, so we must find the range of possible values of V. It is known that, in general, this problem is NP-hard. For the case when the measurements are sufficiently accurate (in some precise sense), it is known that this range can be computed in quadratic time O(n²). In this paper, we describe a new algorithm for computing it that requires time O(n · log(n)), which is much faster than O(n²).
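A sketch of the structure such fast algorithms exploit (an illustrative assumption, not the paper's algorithm: we take "sufficiently accurate" to mean narrow, non-nested intervals): after sorting by midpoint, the variance-maximizing vertex pushes the first k values to their lower endpoints and the rest to their upper endpoints, so only n+1 candidates need checking after an O(n log n) sort.

```python
def variance(xs):
    n = len(xs)
    e = sum(xs) / n
    return sum((x - e) ** 2 for x in xs) / n

def upper_variance_sorted_midpoints(intervals):
    """Largest variance among the n+1 'monotone' vertices obtained by
    sorting intervals by midpoint. As written this is O(n^2); updating
    the sums incrementally per k would make it O(n log n)."""
    ivs = sorted(intervals, key=lambda iv: (iv[0] + iv[1]) / 2)
    best = 0.0
    for k in range(len(ivs) + 1):
        xs = [lo for lo, hi in ivs[:k]] + [hi for lo, hi in ivs[k:]]
        best = max(best, variance(xs))
    return best

print(upper_variance_sorted_midpoints([(0.0, 0.1), (0.5, 0.6), (1.0, 1.1)]))
# ≈ 0.2022, attained at (0.0, 0.6, 1.1)
```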