Results 1–10 of 40
Interval Computations and Interval-Related Statistical Techniques: Tools for Estimating Uncertainty of the Results of Data Processing and Indirect Measurements
Cited by 15 (9 self)
Abstract
In many practical situations, we only know the upper bound ∆ on the (absolute value of the) measurement error ∆x, i.e., we only know that the measurement error is located on the interval [−∆, ∆]. The traditional engineering approach to such situations is to assume that ∆x is uniformly distributed on [−∆, ∆], and to use the corresponding statistical techniques. In some situations, however, this approach underestimates the error of indirect measurements. It is therefore desirable to directly process this interval uncertainty. Such “interval computations” methods have been developed since the 1950s. In this chapter, we provide a brief overview of related algorithms, results, and remaining open problems.
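The interval computations the abstract refers to replace each measured value with the interval of values consistent with its error bound, and propagate these intervals through the computation. A minimal sketch (a hypothetical helper, not taken from any specific interval library):

```python
# Each quantity is a pair (lo, hi); every operation returns bounds that
# are guaranteed to contain the true result whatever the actual values
# inside the input intervals are.

def i_add(a, b):
    """[a] + [b] = [a.lo + b.lo, a.hi + b.hi]"""
    return (a[0] + b[0], a[1] + b[1])

def i_sub(a, b):
    """[a] - [b] = [a.lo - b.hi, a.hi - b.lo]"""
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    """Product bounds: min/max over all four endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

# A measurement x~ = 10.0 with |error| <= 0.5 is the interval [9.5, 10.5].
x = (9.5, 10.5)
y = (1.5, 2.5)
print(i_add(x, y))  # (11.0, 13.0)
print(i_mul(x, y))  # (14.25, 26.25)
```

Unlike the uniform-distribution assumption criticized in the abstract, the resulting bounds are guaranteed, not merely probable.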
Computing Population Variance and Entropy under Interval Uncertainty: Linear-Time Algorithms
, 2006
Cited by 13 (6 self)
Abstract
In statistical analysis of measurement results it is often necessary to compute the range [V̲, V̄] of the population variance V = (1/n) · Σi=1..n (xi − E)², where E = (1/n) · Σi=1..n xi, when we only know the intervals [x̃i − Δi, x̃i + Δi] of possible values of the xi. While V̲ can be computed efficiently, the problem of computing V̄ is, in general, NP-hard. In our previous paper “Population Variance under Interval Uncertainty: A New Algorithm” (Reliable Computing, 2006, Vol. 12, No. 4, pp. 273–280) we showed that in …
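The two bounds behave very differently, which a small Python sketch can illustrate. This is not the paper's linear-time algorithm: here the upper bound is brute-forced over the 2^n interval endpoints (exact because the variance is convex in each xi, so its maximum over a box is attained at a vertex, but exponential, consistent with the NP-hardness of the general problem), and the lower bound uses the fact that at the minimum each xi equals the common mean clipped to its interval:

```python
from itertools import product

def variance(xs):
    n = len(xs)
    e = sum(xs) / n
    return sum((x - e) ** 2 for x in xs) / n

def v_max(boxes):
    """Upper bound V_max by enumerating all 2^n endpoint combinations
    (exact by per-coordinate convexity, but exponential in n)."""
    return max(variance(v) for v in product(*boxes))

def v_min(boxes, iters=100):
    """Lower bound V_min: at the minimum, each x_i is the mean E clipped
    to [lo_i, hi_i]; find the fixed point m = mean(clip(m)) by bisection."""
    lo = min(b[0] for b in boxes)
    hi = max(b[1] for b in boxes)
    for _ in range(iters):
        m = (lo + hi) / 2
        clipped = [min(max(m, a), b) for a, b in boxes]
        if sum(clipped) / len(clipped) >= m:
            lo = m
        else:
            hi = m
    m = (lo + hi) / 2
    return variance([min(max(m, a), b) for a, b in boxes])

boxes = [(0.0, 1.0), (0.0, 1.0), (0.4, 0.6)]
print(v_min(boxes))  # ~ 0.0: all three intervals contain 0.5
print(v_max(boxes))  # ~ 0.1689
```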
Fast Algorithms for Computing Statistics under Interval Uncertainty, with Applications to Computer Science and to Electrical and Computer Engineering
, 2007
Cited by 11 (6 self)
Abstract
Computing statistics is important. In many engineering applications, we are interested in computing statistics. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. For each of these characteristics C, there is an expression C(x1, ..., xn) that enables us to provide an estimate for C based on the observed values x1, ..., xn. For example: a reasonable statistic for estimating the mean value of a probability distribution is the population average E(x1, ..., xn) = (1/n) · (x1 + ... + xn); a reasonable statistic for estimating the variance V is the population variance V(x1, ..., xn) = (1/n) · Σi=1..n (xi − E)² …
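The estimators named in the abstract are direct to write down; a short sketch with the pollution-level example (the data values and the lag-1 autocorrelation convention are illustrative assumptions, not from the abstract):

```python
def population_average(xs):
    # E(x1, ..., xn) = (1/n) * (x1 + ... + xn)
    return sum(xs) / len(xs)

def population_variance(xs):
    # V(x1, ..., xn) = (1/n) * sum_i (x_i - E)^2
    e = population_average(xs)
    return sum((x - e) ** 2 for x in xs) / len(xs)

def autocorrelation(xs, lag):
    # One common lag-k sample autocorrelation estimate; the abstract
    # does not fix a specific convention.
    e, v, n = population_average(xs), population_variance(xs), len(xs)
    return sum((xs[i] - e) * (xs[i + lag] - e) for i in range(n - lag)) / (n * v)

pollution = [3.0, 4.0, 5.0, 4.0, 3.0]  # hypothetical x(t) readings
print(population_average(pollution))   # 3.8
print(population_variance(pollution))  # ~ 0.56
```

Replacing each exact xi by an interval turns each of these one-line formulas into the range-estimation problems the title refers to.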
Decision Making under Interval Uncertainty
Cited by 10 (8 self)
Abstract
To make a decision, we must find out the user’s preferences, and help the user select an alternative which is the best according to these preferences. Traditional decision theory is based on the simplifying assumption that, for each two alternatives, a user can always meaningfully decide which of them is preferable. In reality, when the alternatives are close, the user is often unable to select one of them. In this chapter, we show how the traditional decision theory can be extended to such realistic (interval) cases.
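A standard device in the interval decision-making literature for comparing alternatives whose utilities are only known as intervals is Hurwicz's optimism-pessimism criterion; a sketch, where the optimism weight alpha and the utility intervals are assumed illustrative values (whether this is the specific extension developed in the chapter is not stated in the abstract):

```python
def hurwicz_value(u, alpha=0.5):
    """Collapse an interval utility [u_lo, u_hi] to the single number
    alpha * u_hi + (1 - alpha) * u_lo, where alpha in [0, 1] is the
    user's degree of optimism (an assumed parameter here)."""
    return alpha * u[1] + (1 - alpha) * u[0]

def best_alternative(alternatives, alpha=0.5):
    """alternatives: dict mapping a name to its interval utility (lo, hi)."""
    return max(alternatives, key=lambda a: hurwicz_value(alternatives[a], alpha))

alts = {"A": (0.2, 0.9), "B": (0.4, 0.6)}
print(best_alternative(alts, alpha=0.8))  # "A": 0.76 vs 0.56
print(best_alternative(alts, alpha=0.0))  # "B": a pessimist prefers 0.4 to 0.2
```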
Interval-Based Uncertainty Handling in Model-Based Prediction of System Quality
, 2010
Cited by 6 (4 self)
Abstract
Our earlier research indicates the feasibility of applying the PREDIQT method for model-based prediction of impacts of architecture design changes on system quality. The PREDIQT method develops and makes use of so-called prediction models, a central part of which are the “Dependency Views” (DVs) – weighted trees representing the relationships between system design and the quality notions. The values assigned to the DV parameters originate from domain expert judgments and measurements on the system. However fine-grained, the DVs contain a certain degree of uncertainty due to lack and inaccuracy of empirical input. This paper proposes an approach to the representation, propagation and analysis of uncertainties in DVs. Such an approach is essential to facilitate model fitting, identify the kinds of architecture design changes which can be handled by the prediction models, and indicate the value of added information. Based on a set of criteria, we argue analytically and empirically that our approach is comprehensible, sound, practically useful and better than any other approach we are aware of.
Keywords: uncertainty; system quality; prediction; modeling; architecture design.
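The abstract does not give the DV propagation rule, but one natural reading of a "weighted tree" with interval-valued parameters is that a parent's value interval is the interval-arithmetic weighted sum of its children's intervals. A hypothetical sketch under that assumption (the paper's actual DV semantics may differ):

```python
def interval_weighted_sum(children):
    """children: list of (weight_interval, value_interval) pairs, all with
    nonnegative endpoints, as typical for normalized DV weights.  With
    nonnegative intervals, the weighted-sum bounds come from pairing the
    lower endpoints together and the upper endpoints together.
    (Hypothetical propagation rule, not taken from the paper.)"""
    lo = sum(w[0] * v[0] for w, v in children)
    hi = sum(w[1] * v[1] for w, v in children)
    return (lo, hi)

# Two child quality estimates with uncertain weights and uncertain values:
children = [((0.4, 0.6), (0.7, 0.9)),
            ((0.4, 0.6), (0.5, 0.8))]
print(interval_weighted_sum(children))  # ~ (0.48, 1.02)
```

Propagating intervals rather than point estimates up the tree is what makes the uncertainty of a prediction visible at the root.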
Likelihood-based Imprecise Regression
Cited by 4 (4 self)
Abstract
We introduce a new approach to regression with imprecisely observed data, combining likelihood inference with ideas from imprecise probability theory, and thereby taking different kinds of uncertainty into account. The approach is very general: it provides a uniform theoretical framework for regression analysis with imprecise data, where all kinds of relationships between the variables of interest may be considered and all types of imprecisely observed data are allowed. Furthermore, we propose a regression method based on this approach, where no parametric distributional assumption is needed and likelihood-based interval estimates of quantiles of the residual distribution are used to identify a set of plausible descriptions of the relationship of interest. Thus, the proposed regression method is very robust and yields a set-valued result, whose extent is determined by the amounts of both kinds of uncertainty involved in the regression problem with imprecise data: statistical uncertainty and indetermination. In addition, we apply our robust regression method to an interesting question in the social sciences by analyzing data from a social survey. As a result we obtain a large set of plausible relationships, reflecting the high uncertainty inherent in the analyzed data set.
A Practical Approach to Uncertainty Handling and Estimate Acquisition in Model-based Prediction of System Quality
Cited by 3 (3 self)
Abstract
Our earlier research indicated the feasibility of applying the PREDIQT method for model-based prediction of impacts of architectural design changes on system quality. The PREDIQT method develops and makes use of so-called prediction models, a central part of which are the “Dependency Views” (DVs) – weighted trees representing the relationships between architectural design and the quality characteristics of a target system. The values assigned to the DV parameters originate from domain expert judgements and measurements on the system. However fine-grained, the DVs contain a certain degree of uncertainty due to lack and inaccuracy of empirical input. This paper proposes an approach to the representation, propagation and analysis of uncertainties in DVs. Such an approach is essential to facilitate model fitting (that is, adjustment of models during verification), identify the kinds of architectural design changes which can be handled by the prediction models, and indicate the value of added information. Based on a set of criteria, we argue analytically and empirically that our uncertainty handling approach is comprehensible, sound, practically useful and better than any other approach we are aware of. Moreover, based on experiences from PREDIQT-based analyses through industrial case studies on real-life systems, we also provide guidelines for use of the approach in practice. The guidelines address the ways of obtaining empirical estimates as well as the means and measures for reducing uncertainty of the estimates.
Keywords: uncertainty, system quality prediction, modeling, architectural design, change impact analysis, simulation.
RELIABLE SIMULATION WITH INPUT UNCERTAINTIES USING AN INTERVAL-BASED APPROACH
Cited by 3 (0 self)
Abstract
Uncertainty associated with input parameters and models in simulation has gained attention in recent years. The sources of uncertainty include lack of data and lack of knowledge about physical systems. In this paper, we present a new reliable simulation mechanism to help improve simulation robustness when significant uncertainties exist. The new mechanism incorporates variabilities and uncertainties based on imprecise probabilities, where the statistical distribution parameters in the simulation are intervals instead of precise real numbers. The mechanism generates random interval variates to model the inputs. Interval arithmetic is applied to simulate a set of scenarios simultaneously in each simulation run. To ensure that the interval results bound those from the traditional real-valued simulation, a generic approach is also proposed to specify the number of replications needed to achieve the desired robustness. This new reliable simulation mechanism can be applied to address input uncertainties and thus support robust decision making.
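One way to picture a "random interval variate" is a draw from a uniform distribution whose own bounds are only known as intervals; a hypothetical construction illustrating the idea (not necessarily the paper's exact mechanism):

```python
import random

def interval_uniform(a_int, b_int, rng):
    """Draw one random interval variate from Uniform(a, b) where the
    parameters a and b are themselves intervals.  Using one shared
    random number u for both endpoint scenarios yields an interval that
    brackets every real-valued draw obtainable with parameters inside
    the given intervals.  (Hypothetical construction.)"""
    u = rng.random()
    lo = a_int[0] + u * (b_int[0] - a_int[0])
    hi = a_int[1] + u * (b_int[1] - a_int[1])
    return (min(lo, hi), max(lo, hi))

rng = random.Random(42)
# Uniform(a, b) with a in [0.0, 0.2] and b in [0.8, 1.0]:
samples = [interval_uniform((0.0, 0.2), (0.8, 1.0), rng) for _ in range(3)]
# Each sample is an interval [lo, hi] with 0 <= lo <= hi <= 1.
```

Feeding such interval samples through interval arithmetic is what lets one simulation run cover a whole set of scenarios at once, as the abstract describes.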
Estimating Variance under Interval and Fuzzy Uncertainty: Case of Hierarchical Estimation
 Foundations of Fuzzy Logic and Soft Computing, Proc. World Congress of the Int’l Fuzzy Systems Association IFSA’2007, Cancun, Mexico, June 18–21, 2007, Springer Lecture Notes on Artificial Intelligence
Cited by 2 (1 self)
Abstract
Traditional data processing in science and engineering starts with computing the basic statistical characteristics such as the population mean E and population variance V. In computing these characteristics, it is usually assumed that the corresponding data values x1, ..., xn are known exactly. In many practical situations, we only know intervals [x̲i, x̄i] that contain the actual (unknown) values of xi or, more generally, a fuzzy number that describes xi. In this case, different possible values of xi lead, in general, to different values of E and V. In such situations, we are interested in producing the intervals of possible values of E and V – or fuzzy numbers describing E and V. There exist algorithms for producing such interval and fuzzy estimates. However, these algorithms are more complex than the typical data processing formulas and thus require a larger amount of computation time. If we have several processors, it is desirable to run these algorithms in parallel and thus speed up the computations. In this paper, we show how the algorithms for estimating variance under interval and fuzzy uncertainty can be parallelized.
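The interval mean is the simplest case where the chunk-and-combine parallelization the abstract alludes to works: [E̲, Ē] only needs the sums of lower and upper endpoints, so chunks can be processed independently. A hypothetical sketch (the paper's hierarchical variance algorithms are more involved):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sums(chunk):
    """Partial sums of lower endpoints, upper endpoints, and the chunk
    size -- everything needed to combine chunks into the interval mean."""
    return (sum(lo for lo, _ in chunk), sum(hi for _, hi in chunk), len(chunk))

def interval_mean_parallel(intervals, workers=4):
    """Interval mean [E_lo, E_hi] computed by mapping chunk_sums over
    chunks in a thread pool and combining the partial sums."""
    k = max(1, len(intervals) // workers)
    chunks = [intervals[i:i + k] for i in range(0, len(intervals), k)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(chunk_sums, chunks))
    n = sum(p[2] for p in parts)
    return (sum(p[0] for p in parts) / n, sum(p[1] for p in parts) / n)

data = [(0.9, 1.1), (1.9, 2.1), (2.9, 3.1), (3.9, 4.1)]
print(interval_mean_parallel(data))  # ~ (2.4, 2.6)
```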
How Much for an Interval? a Set? a Twin Set? a P-Box? a Kaucher Interval? Towards an Economics-Motivated Approach to Decision Making
, 2014
Cited by 2 (2 self)
Abstract
A natural idea of decision making under uncertainty is to assign a fair price to different alternatives, and then to use these fair prices to select the best alternative. In this paper, we show how to assign a fair price under different types of uncertainty.