Results 1–10 of 14
Towards combining probabilistic and interval uncertainty in engineering . . .
, 2006
"... ..."
(Show Context)
Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty
11733, SAND2007-0939. hal-00839639, version 1, 28 Jun 2013
"... Sandia is a multiprogram laboratory operated by Sandia Corporation, ..."
Abstract

Cited by 39 (20 self)
Sandia is a multiprogram laboratory operated by Sandia Corporation,
Fast Algorithms for Computing Statistics under Interval Uncertainty, with Applications to Computer Science and to Electrical and Computer Engineering
, 2007
"... Computing statistics is important. In many engineering applications, we are interested in computing statistics. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such ..."
Abstract

Cited by 11 (6 self)
Computing statistics is important. In many engineering applications, we are interested in computing statistics. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. For each of these characteristics C, there is an expression C(x1, ..., xn) that enables us to provide an estimate for C based on the observed values x1, ..., xn. For example: a reasonable statistic for estimating the mean value of a probability distribution is the population average E(x1, ..., xn) = (1/n) · (x1 + ... + xn); a reasonable statistic for estimating the variance V is the population variance V(x1, ..., xn) = (1/n) · ∑ (xi − E)².
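The population average described in this abstract extends directly to interval data: E is increasing in every xi, so its exact range over intervals [lo_i, hi_i] is just the interval of endpoint means. A minimal Python sketch of that easy case (an illustration only; the variance case is much harder, and fast algorithms for it are the subject of this paper):

```python
def interval_mean(intervals):
    """Exact bounds on the population average E = (1/n)(x1 + ... + xn)
    when each xi is only known to lie in [lo_i, hi_i]: E is monotone
    increasing in every xi, so the extremes sit at the endpoints."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

print(interval_mean([(0, 2), (1, 3), (2, 4)]))  # → (1.0, 3.0)
```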
Statistical Data Processing under Interval Uncertainty: Algorithms and Computational Complexity
 Soft Methods for Integrated Uncertainty Modeling
, 2006
"... Why indirect measurements? In many reallife situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. Examples of such quantities are the distance to a star and the amount of oil in a given well. Since we cannot measure y directly, a na ..."
Abstract

Cited by 4 (3 self)
Why indirect measurements? In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. Examples of such quantities are the distance to a star and the amount of oil in a given well. Since we cannot measure y directly, a natural idea is to measure y indirectly. Specifically, we find some easier-to-measure quantities x1, ..., xn which are related to y by a known relation y = f(x1, ..., xn); this relation may be a simple functional transformation or a complex algorithm (e.g., for the amount of oil, the numerical solution to an inverse problem). Then, to estimate y, we first measure the values of the quantities x1, ..., xn, and then we use the results x̃1, ..., x̃n of these measurements to compute an estimate ỹ for y as ỹ = f(x̃1, ..., x̃n).
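The indirect-measurement scheme ỹ = f(x̃1, ..., x̃n) can be sketched in a few lines. The corner-sampling enclosure below is a naive baseline that is exact only when f is monotone in each argument; that monotonicity, and the R = V/I relation, are simplifying assumptions of this sketch, not claims of the paper:

```python
import itertools

def indirect_estimate(f, measured):
    """Point estimate for y = f(x1, ..., xn) from measured values x~_i."""
    return f(*measured)

def corner_range(f, intervals):
    """Enclose f over a box of intervals by evaluating its 2^n corner
    points. Exact when f is monotone in each argument; otherwise only a
    sample of the true range."""
    values = [f(*corner) for corner in itertools.product(*intervals)]
    return min(values), max(values)

# Hypothetical relation: resistance from voltage and current, R = V / I.
f = lambda v, i: v / i
print(indirect_estimate(f, (12.0, 2.0)))             # → 6.0
print(corner_range(f, [(11.5, 12.5), (1.9, 2.1)]))
```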
Towards combining probabilistic and interval uncertainty in engineering calculations
 Proceedings of the Workshop on Reliable Engineering Computing
, 2004
"... Abstract. In many engineering applications, we have to combine probabilistic and interval errors. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, varia ..."
Abstract

Cited by 4 (4 self)
Abstract. In many engineering applications, we have to combine probabilistic and interval errors. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. In environmental measurements, we often only know the values with interval uncertainty. We must therefore modify the existing statistical algorithms to process such interval data. Such modifications are described in this paper.
Foundations of Statistical Processing of Set-valued Data: Towards Efficient Algorithms
 Proceedings of the Fifth International Conference on Intelligent Technologies InTech’04
, 2004
"... Abstract — Due to measurement uncertainty, often, instead of the actual values xi of the measured quantities, we only know the intervals xi = [�xi − ∆i, �xi + ∆i], where �xi is the measured value and ∆i is the upper bound on the measurement error (provided, e.g., by the manufacturer of the measuring ..."
Abstract

Cited by 3 (3 self)
Abstract — Due to measurement uncertainty, often, instead of the actual values xi of the measured quantities, we only know the intervals xi = [x̃i − ∆i, x̃i + ∆i], where x̃i is the measured value and ∆i is the upper bound on the measurement error (provided, e.g., by the manufacturer of the measuring instrument). These intervals can be viewed as random intervals, i.e., as samples from an interval-valued random variable. In such situations, instead of the exact value of a sample statistic such as the covariance Cx,y, we can only have an interval Cx,y of possible values of this statistic. In this paper, we extend the foundations of traditional statistics to statistics of such set-valued data, and describe how this foundation can lead to efficient algorithms for computing the corresponding set-valued statistics.
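As a baseline for the efficient algorithms this paper develops, the interval covariance Cx,y can be bounded exactly by brute force: with all other values fixed, the population covariance is linear in each single xi (and each yi), so its extrema over the box of intervals are attained at corner points. A sketch of that exponential-time baseline (my illustration, not the paper's algorithm):

```python
import itertools

def cov(xs, ys):
    """Population covariance C = (1/n) * sum (xi - mx)(yi - my)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def cov_bounds(x_ints, y_ints):
    """Exact interval of covariance values over interval data, by
    enumerating every corner of the box: 4^n combinations, which is
    precisely the cost efficient algorithms are designed to avoid."""
    values = [cov(xs, ys)
              for xs in itertools.product(*x_ints)
              for ys in itertools.product(*y_ints)]
    return min(values), max(values)
```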
I. PROBABILISTIC APPROACH IS NEEDED
"... Abstract — In traditional security systems, for each task, we either trust an agent or we don’t. If we trust an agent, we allow this agent full access to this particular task. This agent can usually allow his trusted subagents the same access, etc. If a trust management system only uses “trust ” an ..."
Abstract
Abstract — In traditional security systems, for each task, we either trust an agent or we don’t. If we trust an agent, we allow this agent full access to this particular task. This agent can usually allow his trusted sub-agents the same access, etc. If a trust management system only uses “trust” and “no trust” options, then a person must trust everyone in this potentially long chain. The problem is that trust is rarely complete: there is always some probability of distrust. So, when the chain becomes long, the probability of a security leak increases. It is therefore desirable to keep track of trust probabilities, so that we delegate only to agents whose trust is above a certain threshold. In this paper, we present efficient algorithms for handling such probabilities.
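Under the simplest reading of this abstract, with independent per-agent trust probabilities, the trust in a delegation chain is a product, which shrinks as the chain grows. A minimal sketch of that model (my illustration of the motivation, not the paper's algorithms):

```python
from math import prod

def chain_trust(probs):
    """Probability that every agent in a delegation chain behaves,
    assuming the per-agent trust probabilities are independent."""
    return prod(probs)

def may_delegate(probs, threshold):
    """Delegate only if the whole chain stays above the threshold."""
    return chain_trust(probs) >= threshold

print(chain_trust([0.99] * 10))  # ≈ 0.904: even strong links erode with length
```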
Application-motivated combinations of fuzzy, interval, and probability approaches, and their use in geoinformatics, bioinformatics, and engineering
 INT. J. AUTOMATION AND CONTROL
, 2007
"... ..."
(Show Context)