Results 1–9 of 9
Towards combining probabilistic and interval uncertainty in engineering calculations: algorithms for computing statistics under interval uncertainty, and their computational complexity
 Reliable Computing
, 2006
Cited by 41 (40 self)
Abstract. In many engineering applications, we have to combine probabilistic and interval uncertainty. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. In environmental measurements, we can often only measure the values with interval uncertainty. We must therefore modify the existing statistical algorithms to process such interval data. In this paper, we provide a survey of algorithms for computing various statistics under interval uncertainty, together with their computational complexity. The survey includes both known and new algorithms.
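For some of the statistics this survey covers, the interval version is easy: the sample mean is monotone in every observation, so its exact bounds come from the endpoint means. A minimal Python sketch, assuming observations are given as plain (lo, hi) endpoint pairs (a representation not fixed by the paper):

```python
def interval_mean(intervals):
    """Range of the sample mean when each x_i is only known
    to lie in the interval [lo_i, hi_i].

    The mean is non-decreasing in every x_i, so its tightest
    bounds are the mean of the lower endpoints and the mean
    of the upper endpoints.
    """
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n
    hi = sum(b for _, b in intervals) / n
    return lo, hi

# Three pollution readings, each known only to within measurement error:
print(interval_mean([(1.0, 1.5), (0.5, 1.0), (0.75, 1.25)]))  # (0.75, 1.25)
```

The survey's emphasis on computational complexity reflects that other statistics, unlike the mean, are not this easy.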
Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty
, 2007
Cited by 20 (14 self)
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
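Of the statistics listed, the median behaves like the mean: it is non-decreasing in each observation, so its interval bounds follow directly from the medians of the endpoints. A small sketch, again assuming intervals arrive as (lo, hi) pairs:

```python
import statistics

def interval_median(intervals):
    """Range of the sample median for interval-valued data.

    The median is non-decreasing in each x_i, so its bounds are
    the median of the lower endpoints and the median of the
    upper endpoints.
    """
    los = [a for a, _ in intervals]
    his = [b for _, b in intervals]
    return statistics.median(los), statistics.median(his)

print(interval_median([(0.0, 1.0), (2.0, 4.0), (3.0, 5.0)]))  # (2.0, 4.0)
```

The same endpoint argument works for any percentile; it is the variance-like statistics whose computability the report ties to sample size and interval overlap.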
Fast Algorithms for Computing Statistics under Interval Uncertainty, with Applications to Computer Science and to Electrical and Computer Engineering
, 2007
Cited by 6 (3 self)
Computing statistics is important. In many engineering applications, we are interested in computing statistics. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. For each of these characteristics C, there is an expression C(x1, ..., xn) that enables us to provide an estimate for C based on the observed values x1, ..., xn. For example: a reasonable statistic for estimating the mean value of a probability distribution is the population average E(x1, ..., xn) = (1/n) · (x1 + ... + xn); a reasonable statistic for estimating the variance V is the population variance V(x1, ..., xn) = (1/n) · ((x1 − E)^2 + ... + (xn − E)^2).
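The population variance illustrates why "fast algorithms" are the point: V is a convex function of (x1, ..., xn), so its maximum over the box of intervals is attained at one of the 2^n endpoint combinations. A brute-force Python sketch of that upper bound (exponential in n, and only a baseline against the fast algorithms this work develops; the lower bound on V needs a different algorithm, since the minimum of a convex function can lie inside the box):

```python
from itertools import product

def variance(xs):
    """Population variance V = (1/n) * sum((x_i - mean)^2)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

def max_variance(intervals):
    """Exact upper bound on the population variance over interval data.

    Variance is convex in (x_1, ..., x_n), so its maximum over a box
    is attained at a vertex; we enumerate all 2^n endpoint patterns.
    """
    return max(variance(v) for v in product(*intervals))

print(max_variance([(0.0, 1.0), (0.0, 1.0)]))  # 0.25
```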
Foundations of Statistical Processing of Set-Valued Data: Towards Efficient Algorithms
 Proceedings of the Fifth International Conference on Intelligent Technologies InTech’04
, 2004
Cited by 5 (4 self)
Abstract — Due to measurement uncertainty, often, instead of the actual values xi of the measured quantities, we only know the intervals xi = [x̃i − ∆i, x̃i + ∆i], where x̃i is the measured value and ∆i is the upper bound on the measurement error (provided, e.g., by the manufacturer of the measuring instrument). These intervals can be viewed as random intervals, i.e., as samples from an interval-valued random variable. In such situations, instead of the exact value of a sample statistic such as the covariance Cx,y, we can only have an interval Cx,y of possible values of this statistic. In this paper, we extend the foundations of traditional statistics to statistics of such set-valued data, and describe how this foundation can lead to efficient algorithms for computing the corresponding set-valued statistics.
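The interval Cx,y mentioned above can at least be computed exactly by brute force: the sample covariance is linear in each xi (for fixed y) and in each yi (for fixed x), so both of its extrema over the box of intervals are attained at endpoint combinations. A sketch of that observation in Python (exponential in n; the paper's goal is to do better):

```python
from itertools import product

def covariance(xs, ys):
    """Sample covariance C = (1/n) * sum((x_i - mean_x)*(y_i - mean_y))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def covariance_range(x_intervals, y_intervals):
    """Exact interval of possible covariance values for interval samples.

    C is linear in each coordinate separately, so its minimum and
    maximum over the box sit at endpoint combinations; enumerate all.
    """
    vals = [covariance(xs, ys)
            for xs in product(*x_intervals)
            for ys in product(*y_intervals)]
    return min(vals), max(vals)

print(covariance_range([(0.0, 1.0), (0.0, 1.0)],
                       [(0.0, 1.0), (0.0, 1.0)]))  # (-0.25, 0.25)
```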
Towards combining probabilistic and interval uncertainty in engineering calculations
 Proceedings of the Workshop on Reliable Engineering Computing
, 2004
Cited by 4 (4 self)
Abstract. In many engineering applications, we have to combine probabilistic and interval errors. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. In environmental measurements, we often only know the values with interval uncertainty. We must therefore modify the existing statistical algorithms to process such interval data. Such modifications are described in this paper.
Statistical Data Processing under Interval Uncertainty: Algorithms and Computational Complexity
 Soft Methods for Integrated Uncertainty Modeling
, 2006
Cited by 3 (2 self)
Why indirect measurements? In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. Examples of such quantities are the distance to a star and the amount of oil in a given well. Since we cannot measure y directly, a natural idea is to measure y indirectly. Specifically, we find some easier-to-measure quantities x1, ..., xn which are related to y by a known relation y = f(x1, ..., xn); this relation may be a simple functional transformation or a complex algorithm (e.g., for the amount of oil, a numerical solution to an inverse problem). Then, to estimate y, we first measure the values of the quantities x1, ..., xn, and then we use the results x̃1, ..., x̃n of these measurements to compute an estimate ỹ for y as ỹ = f(x̃1, ..., x̃n).
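When each measured xi is only known up to an interval, the indirect estimate ỹ becomes a range of possible values. A minimal sketch of propagating the intervals through f by evaluating it at all endpoint combinations — exact when f is monotone (or multilinear) in each argument, but for a general f this only samples the range, and proper interval arithmetic is needed for a guaranteed enclosure:

```python
from itertools import product

def range_by_vertices(f, intervals):
    """Estimate the range of y = f(x1, ..., xn) for interval-valued inputs.

    Evaluates f at every combination of interval endpoints. Exact for
    componentwise-monotone f; only a heuristic sample for general f.
    """
    vals = [f(*v) for v in product(*intervals)]
    return min(vals), max(vals)

# Hypothetical indirect measurement: y = x1 * x2 with interval-valued inputs.
print(range_by_vertices(lambda a, b: a * b, [(2.0, 3.0), (4.0, 5.0)]))  # (8.0, 15.0)
```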
Application-Motivated Combinations of Fuzzy, Interval, and Probability Approaches, with Application to Geoinformatics
Abstract—Since the 1960s, many algorithms have been designed to deal with interval uncertainty. In the last decade, there has been a lot of progress in extending these algorithms to the case when we have a combination of interval, probabilistic, and fuzzy uncertainty. We provide an overview of related algorithms, results, and remaining open problems. I. MAIN PROBLEM. Why indirect measurements? In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. Examples of such quantities are the distance to a star and the amount of oil in a given well. Since we cannot measure y directly, a natural idea is to measure y indirectly. Specifically, we find some easier-to-measure quantities x1, ..., xn which are related to y by a known relation y = f(x1, ..., xn); this relation may be a simple functional transformation or a complex algorithm (e.g., for the amount of oil, a numerical solution to an inverse problem). Then, to estimate y, we first measure the values of the quantities x1, ..., xn, and then we use the results x̃1, ..., x̃n of these measurements to compute an estimate ỹ for y as ỹ = f(x̃1, ..., x̃n).
I. PROBABILISTIC APPROACH IS NEEDED
Abstract — In traditional security systems, for each task, we either trust an agent or we don’t. If we trust an agent, we allow this agent full access to this particular task. This agent can usually allow his trusted subagents the same access, etc. If a trust management system only uses “trust” and “no trust” options, then a person should trust everyone in this potentially long chain. The problem is that trust is rarely complete: there is always a certain probability of distrust. So, when the chain becomes long, the probability of a security leak increases. It is desirable to keep track of trust probabilities, so that we delegate only to agents whose trust is above a certain threshold. In this paper, we present efficient algorithms for handling such probabilities.
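The core arithmetic behind this observation can be sketched in a few lines, assuming the links of a delegation chain fail independently (a modeling assumption the abstract does not spell out): the chain's trust is the product of the per-link trust probabilities, which decays exponentially with chain length.

```python
def chain_trust(link_probs):
    """Probability that a whole delegation chain is trustworthy,
    assuming independent links: the product of per-link probabilities."""
    p = 1.0
    for q in link_probs:
        p *= q
    return p

def delegate_ok(link_probs, threshold):
    """Allow delegation only while the chain's trust stays above the threshold."""
    return chain_trust(link_probs) >= threshold

# Even highly trusted links erode quickly along a chain of ten:
print(chain_trust([0.95] * 10))  # about 0.599
```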
TO ELECTRICAL AND COMPUTER ENGINEERING
Dean of the Graduate School. This dissertation is dedicated to my deeply loved grandfather, who passed away in 2003. His wish to pursue a graduate study could not be fulfilled due to World War II. And to my parents, my wife Qianyin, and my son Kevin, for their great love. FAST ALGORITHMS FOR COMPUTING STATISTICS