Results 1–10 of 30
A New Cauchy-Based Black-Box Technique for Uncertainty in Risk Analysis
Reliability Engineering and Systems Safety, 2002
"... Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. T ..."
Abstract

Cited by 28 (13 self)
 Add to MetaCart
(Show Context)
Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need to describe methods of handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general formalism for handling different types of uncertainty, and to describe a new black-box technique for processing this type of uncertainty.
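The Cauchy-deviate idea behind such black-box techniques can be sketched as follows. This is a minimal illustration, assuming f is approximately linear near the measured point and can be evaluated slightly outside the box; the function and parameter names are illustrative, not from the paper:

```python
import math
import random

def cauchy_deviate_range(f, x_tilde, deltas, n_samples=2000, seed=0):
    """Estimate the range of f over the box [x_i - D_i, x_i + D_i].

    Key fact: if f is (approximately) linear, then for deviates
    d_i ~ Cauchy(0, D_i), the difference dy = f(x~ + d) - f(x~) is
    Cauchy-distributed with scale D = sum_i |df/dx_i| * D_i, which is
    exactly the half-width of the range of f over the box.
    """
    rng = random.Random(seed)
    y0 = f(x_tilde)
    dys = []
    for _ in range(n_samples):
        # Cauchy(0, D_i) deviate via the inverse-CDF transform.
        d = [Di * math.tan(math.pi * (rng.random() - 0.5)) for Di in deltas]
        dys.append(f([x + di for x, di in zip(x_tilde, d)]) - y0)
    # Maximum-likelihood estimate of the Cauchy scale by bisection:
    # solve sum_k 1 / (1 + (dy_k / D)^2) = n / 2 for D (the left side
    # increases monotonically in D).
    lo, hi = 1e-12, max(abs(v) for v in dys) + 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        s = sum(1.0 / (1.0 + (v / mid) ** 2) for v in dys)
        if s < n_samples / 2:
            lo = mid  # D too small: increase it
        else:
            hi = mid
    D = 0.5 * (lo + hi)
    return y0 - D, y0 + D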
Fast Quantum Algorithms for Handling Probabilistic, Interval, and Fuzzy Uncertainty
2003
"... We show how quantum computing can speed up computations related to processing probabilistic, interval, and fuzzy uncertainty. ..."
Abstract

Cited by 12 (9 self)
 Add to MetaCart
(Show Context)
We show how quantum computing can speed up computations related to processing probabilistic, interval, and fuzzy uncertainty.
Interval-Based Robust Statistical Techniques for Non-Negative Convex Functions, with Application to Timing Analysis of Computer Chips
Proceedings of the Second International Workshop on Reliable Engineering Computing, 2006
"... In chip design, one of the main objectives is to decrease its clock cycle. On the design stage, this time is usually estimated by using worstcase (interval) techniques, in which we only use the bounds on the parameters that lead to delays. This analysis does not take into account that the probabili ..."
Abstract

Cited by 11 (4 self)
 Add to MetaCart
(Show Context)
In chip design, one of the main objectives is to decrease the clock cycle. At the design stage, this time is usually estimated by using worst-case (interval) techniques, in which we only use the bounds on the parameters that lead to delays. This analysis does not take into account that the probability of the worst-case values is usually very small; thus, the resulting estimates are over-conservative, leading to unnecessary over-design and under-performance of circuits. If we knew the exact probability distributions of the corresponding parameters, then we could use Monte Carlo simulations (or the corresponding analytical techniques) to get the desired estimates. In practice, however, we only have partial information about the corresponding distributions, and we want to produce estimates that are valid for all distributions which are consistent with this information.
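The gap between worst-case (interval) and probabilistic estimates can be seen on a toy timing model; the gate counts, delay bounds, and the uniform-distribution assumption here are all made up for illustration:

```python
import random

# Toy timing model: a path's delay is the sum of its gate delays d_i,
# each known only to lie in [lo_i, hi_i].
gates = [(0.8, 1.2)] * 20  # 20 gates, each delay in [0.8, 1.2] ns

# Worst-case (interval) estimate: every gate at its upper bound.
worst_case = sum(hi for lo, hi in gates)  # 24.0 ns

# Monte Carlo estimate under an assumed uniform distribution: the
# 99th-percentile delay is typically far below the worst case, because
# all 20 gates rarely sit near their upper bounds simultaneously.
rng = random.Random(0)
samples = [sum(rng.uniform(lo, hi) for lo, hi in gates)
           for _ in range(10000)]
p99 = sorted(samples)[int(0.99 * len(samples))]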
Probabilities, intervals, what next? Extension of interval computations to situations with partial information about probabilities
Proceedings of the 10th IMEKO TC7 International Symposium on Advances of Measurement Science, 2004
"... Abstract. In many reallife situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. To estimate y, we find some easiertomeasure quantities x1,..., xn which are related to y by a known relation y = f(x1,..., xn). Measurements are neve ..."
Abstract

Cited by 10 (5 self)
 Add to MetaCart
In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. To estimate y, we find some easier-to-measure quantities x1, ..., xn which are related to y by a known relation y = f(x1, ..., xn). Measurements are never 100% accurate; hence, the measured values x̃i are different from xi, and the resulting estimate ỹ = f(x̃1, ..., x̃n) is different from the desired value y = f(x1, ..., xn). How different? The traditional engineering approach to error estimation in data processing assumes that we know the probabilities of different values of the measurement error ∆xi = x̃i − xi. In many practical situations, we only know the upper bound ∆i for this error; hence, after the measurement, the only information that we have about xi is that it belongs to the interval xi = [x̃i − ∆i, x̃i + ∆i]. In this case, it is important to find the range y of all possible values of y = f(x1, ..., xn) when xi ∈ xi. We start with a brief overview of the corresponding interval computation problems. We then discuss what to do when, in addition to the upper bounds ∆i, we have some partial information about the probabilities of different values of ∆xi.
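The basic interval computation problem can be illustrated with naive interval arithmetic. This is only a sketch, not a validated library (no outward rounding, no division), and the example function is made up:

```python
class Interval:
    """Minimal interval arithmetic: each operation returns an interval
    guaranteed to contain every possible result of the operation on
    points drawn from the operand intervals."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        # Subtracting an interval flips its endpoints.
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi,
              self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Indirect measurement y = f(x1, x2) = x1*x2 - x1, with measured
# values x~1 = 2 (Delta1 = 0.1) and x~2 = 3 (Delta2 = 0.2):
x1 = Interval(1.9, 2.1)
x2 = Interval(2.8, 3.2)
y = x1 * x2 - x1  # enclosure [3.22, 4.82]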
Fast Quantum Algorithms for Handling Probabilistic and Interval Uncertainty
2003
"... this paper, we show how the use of quantum computing can speed up some computations related to interval and probabilistic uncertainty. We end the paper with speculations on whether (and how) "hypothetic" physical devices can compute NPhard problems faster than in exponential time ..."
Abstract

Cited by 7 (7 self)
 Add to MetaCart
In this paper, we show how the use of quantum computing can speed up some computations related to interval and probabilistic uncertainty. We end the paper with speculations on whether (and how) "hypothetical" physical devices can compute NP-hard problems faster than in exponential time.
Sensitivity in risk analyses with uncertain numbers
2006
"... Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
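A toy version of the pinching strategy can be written with plain intervals instead of Dempster-Shafer structures or probability boxes; the names and the grid-sampling shortcut below are illustrative only:

```python
import itertools

def range_width(f, boxes, grid=5):
    """Approximate the width of the range of f over a box by grid
    sampling (adequate for this illustration; it gives exact widths
    for monotone f but is not a guaranteed enclosure in general)."""
    axes = [[lo + (hi - lo) * k / (grid - 1) for k in range(grid)]
            for lo, hi in boxes]
    vals = [f(p) for p in itertools.product(*axes)]
    return max(vals) - min(vals)

def pinching_sensitivity(f, boxes):
    """For each input, 'pinch' its interval to its midpoint and report
    (index, fractional reduction in output uncertainty): a simple
    interval analogue of the pinching strategy described above."""
    base = range_width(f, boxes)
    results = []
    for i, (lo, hi) in enumerate(boxes):
        mid = 0.5 * (lo + hi)
        pinched = list(boxes)
        pinched[i] = (mid, mid)  # hypothetically remove this uncertainty
        results.append((i, 1.0 - range_width(f, pinched) / base))
    return results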
Interval Computations and Interval-Related Statistical Techniques: Tools for Estimating Uncertainty of the Results of Data Processing and Indirect Measurements
"... In many practical situations, we only know the upper bound ∆ on the (absolute value of the) measurement error ∆x, i.e., we only know that the measurement error is located on the interval [−∆, ∆]. The traditional engineering approach to such situations is to assume that ∆x is uniformly distributed on ..."
Abstract

Cited by 6 (1 self)
 Add to MetaCart
(Show Context)
In many practical situations, we only know the upper bound ∆ on the (absolute value of the) measurement error ∆x, i.e., we only know that the measurement error is located on the interval [−∆, ∆]. The traditional engineering approach to such situations is to assume that ∆x is uniformly distributed on [−∆, ∆], and to use the corresponding statistical techniques. In some situations, however, this approach underestimates the error of indirect measurements. It is therefore desirable to directly process this interval uncertainty. Such “interval computations” methods have been developed since the 1950s. In this chapter, we provide a brief overview of related algorithms, results, and remaining open problems.
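A quick numerical illustration of how the uniform-distribution assumption can understate the guaranteed error of an indirect measurement (the numbers here are made up):

```python
import math

# y = x1 + ... + xn, with each measurement error bounded by Delta.
# Guaranteed (interval) error bound for the sum: n * Delta.
# Under the traditional assumption that each error is uniform on
# [-Delta, Delta], each term has standard deviation Delta / sqrt(3),
# so the sum has standard deviation sqrt(n) * Delta / sqrt(3).
n, Delta = 100, 0.1
interval_bound = n * Delta                                # 10.0
uniform_3sigma = 3 * math.sqrt(n) * Delta / math.sqrt(3)  # about 1.73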
Estimating Probability of Failure of a Complex System Based on Partial Information about Subsystems and Components, with Potential Applications to Aircraft Maintenance
Proceedings of the International Workshop on Soft Computing Applications and Knowledge Discovery SCAKD'2011, 2011
"... Abstract. In many reallife applications (e.g., in aircraft maintenance), we need to estimate the probability of failure of a complex system (such as an aircraft as a whole or one of its subsystems). Complex systems are usually built with redundancy allowing them to withstand the failure of a small ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
(Show Context)
In many real-life applications (e.g., in aircraft maintenance), we need to estimate the probability of failure of a complex system (such as an aircraft as a whole or one of its subsystems). Complex systems are usually built with redundancy allowing them to withstand the failure of a small number of components. In this paper, we assume that we know the structure of the system, and, as a result, for each possible set of failed components, we can tell whether this set will lead to a system failure. For each component A, we know the probability P(A) of its failure with some uncertainty: e.g., we know lower and upper bounds for this probability. Usually, it is assumed that failures of different components are independent events. Our objective is to use all this information to estimate the probability of failure of the entire complex system. In this paper, we describe a new efficient method for such estimation based on Cauchy deviates.
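The paper's efficient method is based on Cauchy deviates; as a point of comparison, here is a brute-force sketch that exploits only the monotonicity of a coherent system. It is exponential in the number of components, so it is an illustration only, and all names are hypothetical:

```python
import itertools

def system_failure_prob(fails, p):
    """Exact failure probability for independent components: sum, over
    every subset S of components, of the probability that exactly the
    components in S fail, counted when fails(S) says the system goes
    down.  Exponential in the number of components."""
    n = len(p)
    total = 0.0
    for bits in itertools.product([0, 1], repeat=n):
        failed = [i for i in range(n) if bits[i]]
        if fails(failed):
            pr = 1.0
            for i in range(n):
                pr *= p[i] if bits[i] else 1.0 - p[i]
            total += pr
    return total

def failure_prob_bounds(fails, p_lo, p_hi):
    """For a monotone (coherent) system -- where additional component
    failures never help -- the system failure probability is
    non-decreasing in each P(A), so its bounds are attained at the
    endpoints of the component probability intervals."""
    return (system_failure_prob(fails, p_lo),
            system_failure_prob(fails, p_hi))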
Ellipsoids and Ellipsoid-Shaped Fuzzy Sets as Natural Multi-Variate Generalization of Intervals and Fuzzy Numbers: How to Elicit Them from Users, and How to Use Them in Data Processing
Information Sciences
"... In this paper, we show that ellipsoids are natural multivariate generalization of intervals and ellipsoidshaped fuzzy sets are a natural generalization of fuzzy numbers. We explain how to elicit them from users, and how to use them in data processing. Key words: ellipsoids, fuzzy sets, knowledge e ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
(Show Context)
In this paper, we show that ellipsoids are a natural multivariate generalization of intervals and ellipsoid-shaped fuzzy sets are a natural generalization of fuzzy numbers. We explain how to elicit them from users, and how to use them in data processing.
Key words: ellipsoids, fuzzy sets, knowledge elicitation, data processing
Managing Uncertainty in Engineering Design Using Imprecise Probabilities and Principles of Information Economics
Ph.D. Thesis, Georgia Institute of Technology, 2006
"... OF INFORMATION ECONOMICS ..."
(Show Context)