Results 1–10 of 64
Possibility theory and statistical reasoning
Computational Statistics & Data Analysis, 2006
Cited by 50 (2 self)

Abstract:
Numerical possibility distributions can encode special convex families of probability measures. The connection between possibility theory and probability theory is potentially fruitful in the scope of statistical reasoning when uncertainty due to variability of observations should be distinguished from uncertainty due to incomplete information. This paper proposes an overview of numerical possibility theory. Its aim is to show that some notions in statistics are naturally interpreted in the language of this theory. First, probabilistic inequalities (like Chebyshev’s) offer a natural setting for devising possibility distributions from poor probabilistic information. Moreover, likelihood functions obey the laws of possibility theory when no prior probability is available. Possibility distributions also generalize the notion of confidence or prediction intervals, shedding some light on the role of the mode of asymmetric probability densities in the derivation of maximally informative interval substitutes of probabilistic information. Finally, the simulation of fuzzy sets comes down to selecting a probabilistic representation of a possibility distribution, which coincides with the Shapley value of the corresponding consonant capacity. This selection process is in agreement with Laplace’s indifference principle and is closely connected with the mean interval of a fuzzy interval. It sheds light on the “defuzzification” process in fuzzy set theory and provides a natural definition of a subjective possibility distribution that sticks to the Bayesian framework of exchangeable bets. Potential applications to risk assessment are pointed out.
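As an illustration of the first point in the abstract, a possibility distribution can be read off Chebyshev's inequality: knowing only a mean μ and standard deviation σ, P(|X − μ| ≥ kσ) ≤ 1/k², so assigning possibility 1/k² to values lying k standard deviations from the mean dominates every probability distribution with those two moments. A minimal sketch (the function name and example values are ours, not the paper's):

```python
def chebyshev_possibility(x, mu, sigma):
    """Possibility degree of value x when only the mean mu and the
    standard deviation sigma are known.

    Chebyshev's inequality bounds P(|X - mu| >= k*sigma) by 1/k**2,
    so giving possibility 1/k**2 to points outside [mu - k*sigma,
    mu + k*sigma] dominates every distribution with these moments.
    """
    if sigma <= 0:
        raise ValueError("sigma must be positive")
    k = abs(x - mu) / sigma
    return 1.0 if k <= 1 else 1.0 / k**2

# Values within one standard deviation are fully possible;
# possibility decays quadratically beyond that (k = 3 -> 1/9).
print(chebyshev_possibility(10.0, 10.0, 2.0))  # 1.0
print(chebyshev_possibility(16.0, 10.0, 2.0))
```

This is one concrete instance of "devising possibility distributions from poor probabilistic information".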
Revealing uncertainty for information visualization
In Proc. AVI ’08, 2008
Cited by 20 (0 self)

Abstract:
Uncertainty in data occurs in domains ranging from natural science to medicine to computer science. By developing ways to include uncertainty in our information visualizations, we can provide more accurate visual depictions of critical datasets. One hindrance to visualizing uncertainty is that we must first understand what uncertainty is and how it is expressed by users. We reviewed existing work from several domains on uncertainty and conducted qualitative interviews with 18 people from diverse domains who self-identified as working with uncertainty. We created a classification of uncertainty that captures commonalities across domains and that will be useful for developing appropriate visualizations of uncertainty.
Interval Computations and Interval-Related Statistical Techniques: Tools for Estimating Uncertainty of the Results of Data Processing and Indirect Measurements
Cited by 14 (8 self)

Abstract:
In many practical situations, we only know the upper bound ∆ on the (absolute value of the) measurement error ∆x, i.e., we only know that the measurement error is located on the interval [−∆, ∆]. The traditional engineering approach to such situations is to assume that ∆x is uniformly distributed on [−∆, ∆], and to use the corresponding statistical techniques. In some situations, however, this approach underestimates the error of indirect measurements. It is therefore desirable to directly process this interval uncertainty. Such “interval computations” methods have been developed since the 1950s. In this chapter, we provide a brief overview of related algorithms, results, and remaining open problems.
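A minimal sketch of the interval-computation idea the abstract describes (function names and example values are illustrative, not from the chapter): a measurement x with error bound ∆ becomes the interval [x − ∆, x + ∆], and arithmetic on intervals propagates guaranteed bounds instead of assuming a uniform error distribution.

```python
def interval_add(a, b):
    """[a1, a2] + [b1, b2] = [a1 + b1, a2 + b2]."""
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    """Product range: the min and max over the four endpoint products
    (exact for any sign pattern of the operands)."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# A measurement x with |error| <= delta becomes [x - delta, x + delta];
# arithmetic then carries the guaranteed bounds through the computation.
x = (9.8, 10.2)   # 10 +/- 0.2
y = (4.9, 5.1)    # 5  +/- 0.1
print(interval_add(x, y))  # ~ (14.7, 15.3)
print(interval_mul(x, y))  # ~ (48.02, 52.02)
```

Real interval libraries additionally round the lower bound down and the upper bound up so the enclosure survives floating-point rounding; this sketch omits that.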
Computing Population Variance and Entropy under Interval Uncertainty: Linear-Time Algorithms
2006
Cited by 13 (7 self)

Abstract:
In statistical analysis of measurement results it is often necessary to compute the range [V̲, V̄] of the population variance V = (1/n) · ∑ᵢ (xᵢ − E)², where E = (1/n) · ∑ᵢ xᵢ and the sums run over i = 1, …, n, when we only know the intervals [x̃ᵢ − ∆ᵢ, x̃ᵢ + ∆ᵢ] of possible values of the xᵢ. While the lower bound V̲ can be computed efficiently, the problem of computing the upper bound V̄ is, in general, NP-hard. In our previous paper “Population Variance under Interval Uncertainty: A New Algorithm” (Reliable Computing, 2006, Vol. 12, No. 4, pp. 273–280) we showed that in …
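To see why the upper bound is the hard one: the variance is convex in each xᵢ, so its maximum over the box of intervals is attained at a vertex, and checking all 2ⁿ vertices is exact but exponential, which is precisely what special-case linear-time algorithms avoid. A brute-force sketch for tiny n (names and values are illustrative):

```python
from itertools import product

def variance(xs):
    """Population variance V = (1/n) * sum((x_i - E)**2)."""
    n = len(xs)
    e = sum(xs) / n
    return sum((x - e) ** 2 for x in xs) / n

def upper_variance_bruteforce(intervals):
    """Exact upper bound of the population variance over a box of
    intervals, by enumerating all 2^n vertices.

    Correct because V is convex in each x_i, so the maximum over the
    box is attained at a vertex -- but the cost is exponential in n.
    """
    return max(variance(v) for v in product(*intervals))

intervals = [(0.9, 1.1), (1.9, 2.1), (2.9, 3.1)]
print(upper_variance_bruteforce(intervals))
```

The same enumeration does not give the lower bound, whose minimizer can lie strictly inside the box (values clustering together); that minimization is a tractable convex problem, matching the asymmetry stated in the abstract.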
Estimating Information Amount under Interval Uncertainty: Algorithmic Solvability and Computational Complexity
Proceedings of the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems IPMU ’06
Cited by 10 (6 self)

Abstract:
In most real-life situations, we have uncertainty: we do not know the exact state of the world; there are several (n) different states which are consistent with our knowledge. In such situations, it is desirable to gauge how much information we need to gain to determine the actual state of the world. A natural measure of this amount of information is the average number of “yes”/“no” questions that we need to ask to find the exact state. When we know the probabilities p₁, …, pₙ of the different states, then, as Shannon has shown, this number of questions can be determined as S = −∑ᵢ pᵢ · log₂(pᵢ), with the sum over i = 1, …, n. In many real-life situations, we only have partial information about the probabilities; for example, we may only know intervals pᵢ ∈ [p̲ᵢ, p̄ᵢ] of …
Sensitivity in risk analyses with uncertain numbers
2006
Cited by 8 (0 self)

Abstract:
Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster–Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a “pinching” strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the resulting reduction of uncertainty in the outputs. The prospects for applying the methods to black-box models are also considered.
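The "pinching" strategy can be sketched on a toy interval model (a simple additive model stands in here for the report's dike assessment; the function names and values are ours): pinch one input to a point, recompute the output interval, and report how much of the output's width disappears.

```python
def interval_sum(intervals):
    """Output interval of a toy additive model y = x1 + ... + xn."""
    return (sum(lo for lo, hi in intervals), sum(hi for lo, hi in intervals))

def width(iv):
    return iv[1] - iv[0]

def pinch_effect(intervals, i):
    """Relative reduction in output width when input i is pinched to
    its midpoint, i.e. its epistemic uncertainty is hypothetically
    removed. Larger values mean the input contributes more of the
    output's uncertainty."""
    base = width(interval_sum(intervals))
    mid = (intervals[i][0] + intervals[i][1]) / 2
    pinched = intervals[:i] + [(mid, mid)] + intervals[i + 1:]
    return 1 - width(interval_sum(pinched)) / base

inputs = [(0, 4), (1, 2), (5, 5.5)]
for i in range(3):
    print(f"pinching input {i}: {pinch_effect(inputs, i):.0%} of output width removed")
```

For an additive model the pinch effect is just each input's share of the total width; for nonlinear models and for uncertain numbers mixing aleatory and epistemic parts, the same recompute-and-compare idea applies but the shares no longer sum so neatly.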
A Spreadsheet Approach to Facilitate Visualization of Uncertainty in Information
2008
Cited by 7 (0 self)

Abstract:
Information uncertainty is inherent in many problems and is often subtle and complicated to understand. Although visualization is a powerful means for exploring and understanding information, information uncertainty visualization is ad hoc and not widespread. This paper identifies two main barriers to the uptake of information uncertainty visualization: first, the difficulty of modeling and propagating the uncertainty information and, second, the difficulty of mapping uncertainty to visual elements. To overcome these barriers, we extend the spreadsheet paradigm to encapsulate uncertainty details within cells. This creates an inherent awareness of the uncertainty associated with each variable. The spreadsheet can hide the uncertainty details, enabling the user to think simply in terms of variables. Furthermore, the system can aid with automated propagation of uncertainty information, since it is intrinsically aware of the uncertainty. The system also enables mapping the encapsulated uncertainty to visual elements via the formula language and a visualization sheet. Support for such low-level visual mapping provides flexibility to explore new techniques for information uncertainty visualization.
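The encapsulation idea can be sketched as a cell object that carries an interval of possible values alongside its nominal value and propagates both through formulas automatically. This is a hypothetical minimal sketch of the concept, not the paper's actual system; the class name and bounds are ours.

```python
class UCell:
    """A spreadsheet-style cell holding a nominal value plus an
    interval [lo, hi] of possible values. Formulas written against
    cells propagate the bounds via operator overloading, so the
    formula author can think purely in terms of variables."""

    def __init__(self, value, lo=None, hi=None):
        self.value = value
        self.lo = value if lo is None else lo
        self.hi = value if hi is None else hi

    def __add__(self, other):
        # Interval addition: bounds add endpoint-wise.
        return UCell(self.value + other.value,
                     self.lo + other.lo, self.hi + other.hi)

    def __repr__(self):
        return f"{self.value} [{self.lo}, {self.hi}]"

# The formula is just a + b; the uncertainty details stay hidden
# inside the cells and propagate automatically.
a = UCell(10.0, 9.0, 11.0)
b = UCell(5.0, 4.0, 6.0)
print(a + b)  # 15.0 [13.0, 17.0]
```

A real system of this kind would also expose the encapsulated bounds to a visualization layer (e.g. error bars driven by `lo`/`hi`), which is the second barrier the paper addresses.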
Disaggregated total uncertainty measure for credal sets
Int. J. Gen. Syst., 2006
Cited by 5 (1 self)

Abstract:
We present a new approach to measuring uncertainty/information applicable to theories based on convex sets of probability distributions, also called credal sets. A definition of a total disaggregated uncertainty measure on credal sets is proposed in this paper, motivated by recent results. This definition is based on the upper and lower values of Shannon’s entropy for a credal set. We justify the use of the proposed total uncertainty measure and the parts into which it is divided: the maximum difference of entropies, which can be used as a non-specificity measure (imprecision), and the minimum of entropy, which represents a measure of conflict (contradiction).
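The disaggregation is easy to compute in the two-state special case, where a credal set is just an interval of probabilities for one of the states. A sketch under that assumption (the general multi-state case requires optimizing entropy over a polytope; function names are ours):

```python
import math

def binary_entropy(p):
    """Shannon entropy (bits) of a two-state distribution (p, 1 - p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def disaggregated_uncertainty(p_lo, p_hi):
    """For the credal set {P : p_lo <= P(state1) <= p_hi} over two
    states, return (non-specificity, conflict) = (H_max - H_min, H_min).

    Binary entropy is concave with its peak at p = 0.5, so H_max is
    attained at the feasible point nearest 0.5 and H_min at an
    endpoint of the interval."""
    p_star = min(max(0.5, p_lo), p_hi)  # clip 0.5 into [p_lo, p_hi]
    h_max = binary_entropy(p_star)
    h_min = min(binary_entropy(p_lo), binary_entropy(p_hi))
    return h_max - h_min, h_min

nonspec, conflict = disaggregated_uncertainty(0.3, 0.8)
print(nonspec, conflict)
```

A precise probability (p_lo = p_hi) yields zero non-specificity, all uncertainty being conflict; a wide interval shifts the total toward non-specificity, matching the imprecision/contradiction split described in the abstract.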
Statistical Data Processing under Interval Uncertainty: Algorithms and Computational Complexity
Soft Methods for Integrated Uncertainty Modeling, 2006
Cited by 3 (2 self)

Abstract:
(Show Context)
Why indirect measurements? In many reallife situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. Examples of such quantities are the distance to a star and the amount of oil in a given well. Since we cannot measure y directly, a natural idea is to measure y indirectly. Specifically, we find some easiertomeasure quantities x1,..., xn which are related to y by a known relation y = f(x1,..., xn); this relation may be a simple functional transformation, or complex algorithm (e.g., for the amount of oil, numerical solution to an inverse problem). Then, to estimate y, we first measure the values of the quantities x1,..., xn, and then we use the results �x1,..., �xn of these measurements to to compute an estimate �y for y as �y = f(�x1,..., �xn): �x1 �x2