Results 1–10 of 11
Fast algorithm for computing the upper endpoint of sample variance for interval data: case of sufficiently accurate measurements
Reliable Computing, 2006
Abstract

Cited by 12 (7 self)
When we have n results x1, ..., xn of repeated measurement of the same quantity, the traditional statistical approach usually starts with computing their sample average E and their sample variance V. Often, due to the inevitable measurement uncertainty, we do not know the exact values of the quantities; we only know the intervals xi of possible values of xi. In such situations, for different possible values xi ∈ xi, we get different values of the variance. We must therefore find the range V of possible values of V. It is known that in general, this problem is NP-hard. For the case when the measurements are sufficiently accurate (in some precise sense), it is known that we can compute the interval V in quadratic time O(n²). In this paper, we describe a new algorithm for computing V that requires time O(n · log(n)) (which is much faster than O(n²)).
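The paper's O(n · log(n)) algorithm is not reproduced here, but the quantity it computes can be checked by brute force for small n: since the variance is convex in each xi, its maximum over a box of intervals is attained at a vertex, so enumerating all 2^n endpoint combinations gives the upper endpoint. A minimal sketch (function names are ours):

```python
from itertools import product

def variance(xs):
    """Population variance V = (1/n) * sum((x - E)^2)."""
    n = len(xs)
    e = sum(xs) / n
    return sum((x - e) ** 2 for x in xs) / n

def upper_variance(intervals):
    """Upper endpoint of the sample variance over interval data.

    Brute force over all 2^n endpoint combinations -- exponential,
    so only a reference check, not the paper's fast algorithm.
    Correct because V is convex in each x_i, hence maximized at a
    vertex of the box.
    """
    return max(variance(point) for point in product(*intervals))

# Example: three interval-valued measurements
box = [(0.9, 1.1), (1.9, 2.1), (2.9, 3.1)]
print(upper_variance(box))
```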
Fast Algorithms for Computing Statistics under Interval Uncertainty, with Applications to Computer Science and to Electrical and Computer Engineering
2007
Abstract

Cited by 6 (3 self)
Computing statistics is important. In many engineering applications, we are interested in computing statistics. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. For each of these characteristics C, there is an expression C(x1, ..., xn) that enables us to provide an estimate for C based on the observed values x1, ..., xn. For example: a reasonable statistic for estimating the mean value of a probability distribution is the population average E(x1, ..., xn) = (1/n) · (x1 + ... + xn); a reasonable statistic for estimating the variance V is the population variance V(x1, ..., xn) = (1/n) · ∑ ...
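The two estimators quoted in the abstract can be written out directly; a minimal sketch in Python (function names are ours):

```python
def population_average(xs):
    # E(x1,...,xn) = (1/n) * (x1 + ... + xn)
    return sum(xs) / len(xs)

def population_variance(xs):
    # V(x1,...,xn) = (1/n) * sum_i (xi - E)^2
    e = population_average(xs)
    return sum((x - e) ** 2 for x in xs) / len(xs)

print(population_average([1.0, 2.0, 3.0]))   # 2.0
print(population_variance([1.0, 2.0, 3.0]))  # approx. 0.6667
```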
Towards combining probabilistic and interval uncertainty in engineering calculations
Proceedings of the Workshop on Reliable Engineering Computing, 2004
Abstract

Cited by 4 (4 self)
Abstract. In many engineering applications, we have to combine probabilistic and interval errors. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. In environmental measurements, we often only know the values with interval uncertainty. We must therefore modify the existing statistical algorithms to process such interval data. Such modifications are described in this paper.
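For the mean, the modification to interval data is straightforward: the sample mean is monotone increasing in each xi, so its range over a box of intervals is obtained by averaging the lower and the upper endpoints separately. A minimal sketch (function name is ours):

```python
def interval_mean(intervals):
    """Range of the sample mean over interval data.

    Because the mean is monotone increasing in each x_i, its range
    over the box [x1] x ... x [xn] is simply
    [mean of lower endpoints, mean of upper endpoints].
    """
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n
    hi = sum(b for _, b in intervals) / n
    return lo, hi

print(interval_mean([(0.0, 2.0), (2.0, 4.0)]))  # (1.0, 3.0)
```

The analogous range for the variance is much harder: as the first abstract above notes, computing it exactly is NP-hard in general.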
Statistical Data Processing under Interval Uncertainty: Algorithms and Computational Complexity
Soft Methods for Integrated Uncertainty Modeling, 2006
Abstract

Cited by 3 (2 self)
Why indirect measurements? In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. Examples of such quantities are the distance to a star and the amount of oil in a given well. Since we cannot measure y directly, a natural idea is to measure y indirectly. Specifically, we find some easier-to-measure quantities x1, ..., xn which are related to y by a known relation y = f(x1, ..., xn); this relation may be a simple functional transformation, or a complex algorithm (e.g., for the amount of oil, a numerical solution to an inverse problem). Then, to estimate y, we first measure the values of the quantities x1, ..., xn, and then we use the results x̃1, ..., x̃n of these measurements to compute an estimate ỹ for y as ỹ = f(x̃1, ..., x̃n).
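The plug-in estimate ỹ = f(x̃1, ..., x̃n) described above can be sketched as follows; the voltage/current relation used in the example is a hypothetical stand-in, not taken from the paper:

```python
def estimate_indirect(f, measured):
    """Estimate y = f(x1,...,xn) indirectly: plug the measurement
    results x~1,...,x~n into the known relation f to get y~."""
    return f(*measured)

# Hypothetical example relation: resistance from voltage and current
def resistance(v, i):
    return v / i  # Ohm's law, R = V / I

print(estimate_indirect(resistance, [12.0, 2.0]))  # 6.0
```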
Multidimensional Interval Data: Metrics and Factorial Analysis
Abstract

Cited by 1 (0 self)
Abstract. Statistical units described by interval-valued variables represent a special case of Symbolic Objects, where all descriptors are quantitative variables. In this context, the paper presents two different metrics in R^p for interval-valued data that are based on the definition of the Hausdorff distance in R. The Hausdorff distance in R^p (for any p ≥ 1) is an L∞ norm between pairs of closed sets. However, when p > 1 the problem complexity leads towards the definition of L2 norms that approximate the Hausdorff distance as well as possible. Given a set of n units described by p interval-valued variables, we compute and represent the distances over factorial planes that are defined by factorial analyses consistent with the two distance measure definitions.
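In R, the Hausdorff distance between two closed intervals reduces to max(|a1 − a2|, |b1 − b2|); under the L∞ norm, the distance between two axis-aligned boxes is the maximum of the per-coordinate interval distances. A minimal sketch of this construction (function names are ours):

```python
def hausdorff_1d(i1, i2):
    """Hausdorff distance between closed intervals in R:
    d_H([a1,b1],[a2,b2]) = max(|a1-a2|, |b1-b2|)."""
    (a1, b1), (a2, b2) = i1, i2
    return max(abs(a1 - a2), abs(b1 - b2))

def hausdorff_box_linf(box1, box2):
    """Hausdorff distance between axis-aligned boxes in R^p under
    the L-infinity norm: the maximum of the per-coordinate
    interval distances (a sketch of the abstract's construction)."""
    return max(hausdorff_1d(i1, i2) for i1, i2 in zip(box1, box2))

print(hausdorff_1d((0.0, 1.0), (0.5, 2.0)))  # 1.0
```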
Abstract
Abstract
This paper addresses the problem of market risk management for a company in the electricity industry. When dealing with corporate volumetric exposure, there is a need for a methodology that helps to manage the aggregate risks in energy markets. The originality of the approach presented lies in the use of intervals to formulate a specific portfolio optimization problem under stochastic dominance constraints.
Acknowledgements
2007
Abstract
This dissertation is dedicated to my deeply loved grandfather who passed away in 2003. His wish to pursue a graduate study could not be fulfilled due to World War II. And to my parents, my wife Qianyin and my son Kevin for their great love.