Results 1–7 of 7
Soft Computing Explains Heuristic Numerical Methods in Data Processing and in Logic Programming
, 1997
Abstract

Cited by 18 (16 self)
We show that fuzzy logic and other soft computing approaches explain and justify heuristic numerical methods used in data processing and in logic programming, in particular, M-methods in robust statistics, regularization techniques, metric fixed point theorems, etc. Introduction What is soft computing good for? Traditional viewpoint. When are soft computing methods (fuzzy, neural, etc.) mostly used now? Let us take, as an example, control, which is one of the major success stories of soft computing (especially of fuzzy methods; see, e.g., (Klir 1995)). • In control, if we know the exact equations that describe the controlled system, and if we know the exact objective function of the control, then we can often apply the optimal control techniques developed in traditional (crisp) control theory and compute the optimal control. Even in these situations, we can, in principle, use soft computing methods instead: e.g., we can use simpler fuzzy control rules instead of (more complicated...
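The "simpler fuzzy control rules" contrasted with crisp optimal control can be illustrated by a minimal sketch: two triangular membership functions for the control error and a weighted-average defuzzification. All rule shapes and output values below are illustrative assumptions, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_control(error):
    """Two-rule fuzzy controller (illustrative rule base).

    Rule 1: if error is negative, push with +1.
    Rule 2: if error is positive, push with -1.
    Defuzzify by a weighted average of the rule outputs.
    """
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    if mu_neg + mu_pos == 0.0:
        return 0.0  # no rule fires: apply no control
    return (mu_neg * 1.0 + mu_pos * (-1.0)) / (mu_neg + mu_pos)

print(fuzzy_control(-1.0))  # "negative" rule fires fully: 1.0
print(fuzzy_control(0.5))   # only the "positive" rule fires: -1.0
```

Such a rule base is much simpler to write down than the exact system equations that crisp optimal control would require.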
How to take into account dependence between the inputs: from interval computations to constraint-related set computations, with potential applications to nuclear safety, bio- and geosciences
 Proceedings of the Second International Workshop on Reliable Engineering Computing
Abstract

Cited by 13 (12 self)
In the traditional interval computations approach to handling uncertainty, we assume that we know the intervals xi of possible values of different parameters xi, and we assume that an arbitrary combination of these values is possible. In geometric terms, in the traditional interval computations approach, the set of possible combinations x = (x1, ..., xn) is a box x = x1 × ... × xn. In many real-life situations, in addition to knowing the intervals xi of possible values of each variable xi, we also know additional restrictions on the possible combinations of xi; in this case, the set x of possible values of x is a subset of the original box. For example, in addition to knowing the bounds on x1 and x2, we may also know that the difference between x1 and x2 cannot exceed a certain amount. Informally speaking, the parameters xi are no longer independent – in the sense that the set of possible values of xi may depend on the values of other parameters. In interval computations, we start with independent inputs; as we follow computations, we get dependent intermediate results: e.g., for x1 − x1², the values of x1...
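The dependence problem mentioned at the end of the abstract can be shown on the expression x1 − x1²: naive interval arithmetic treats the two occurrences of x1 as independent and overestimates the true range. A minimal sketch (the function names are illustrative):

```python
def naive_interval_range(lo, hi):
    """Evaluate x - x^2 on [lo, hi] (0 <= lo) treating x and x^2
    as independent intervals, as naive interval arithmetic does."""
    sq_lo, sq_hi = lo * lo, hi * hi
    # interval subtraction: [a, b] - [c, d] = [a - d, b - c]
    return (lo - sq_hi, hi - sq_lo)

def true_range(lo, hi, steps=10000):
    """Brute-force the actual range of x - x^2 over [lo, hi]."""
    xs = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    ys = [x - x * x for x in xs]
    return (min(ys), max(ys))

print(naive_interval_range(0.0, 1.0))  # (-1.0, 1.0): too wide
print(true_range(0.0, 1.0))            # (0.0, 0.25): the actual range
```

The naive enclosure [−1, 1] is four times wider than the true range [0, 0.25], which is exactly the kind of dependence that constraint-related set computations aim to exploit.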
Fast Quantum Algorithms for Handling Probabilistic and Interval Uncertainty
, 2003
Abstract

Cited by 9 (7 self)
In this paper, we show how the use of quantum computing can speed up some computations related to interval and probabilistic uncertainty. We end the paper with speculations on whether (and how) "hypothetic" physical devices can compute NP-hard problems faster than in exponential time.
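The speedup the abstract refers to is typically the quadratic one from Grover-style search: an unstructured search over N candidates needs on the order of N classical oracle queries, but only about (π/4)·√N quantum queries. The sketch below just compares these query counts; it is not a quantum simulation, and the function names are illustrative.

```python
import math

def classical_queries(n):
    """Worst-case classical unstructured search: inspect every candidate."""
    return n

def grover_queries(n):
    """Approximate number of Grover iterations for one marked item."""
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in (10**2, 10**4, 10**6):
    print(n, classical_queries(n), grover_queries(n))
```

For a million candidates this is roughly 786 quantum queries versus a million classical ones, which is the kind of gap that makes quantum methods attractive for the NP-hard interval problems discussed in the paper.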
Why Product of Probabilities (Masses) for Independent Events? A Theorem
Abstract
For independent events A and B, the probability P(A & B) is equal to the product of the corresponding probabilities: P(A & B) = P(A) · P(B). It is well known that the product f(a, b) = a · b has the following property: ...
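The snippet is cut off, but a standard property of the product f(a, b) = a · b is that a double sum of f over two lists factorizes: the sum of all pairwise products equals the product of the two sums. Assuming this is the property meant (the abstract's continuation is not available here), a quick numerical check:

```python
def double_sum(f, a, b):
    """Sum f(a_i, b_j) over all pairs (i, j)."""
    return sum(f(ai, bj) for ai in a for bj in b)

a = [0.2, 0.3, 0.5]   # e.g. probabilities of events A_i (illustrative)
b = [0.4, 0.6]        # e.g. probabilities of events B_j (illustrative)

lhs = double_sum(lambda x, y: x * y, a, b)
rhs = sum(a) * sum(b)
print(lhs, rhs)  # both are (up to rounding) 1.0 here
```

For probabilities of mutually exclusive exhaustive events, both sides equal 1, which is what one needs for products of masses to remain consistently normalized.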
1.1 General Problem of Data Processing under Uncertainty
, 2006
Abstract
In many real-life situations, in addition to knowing the intervals xi of possible values of each variable xi, we also know additional restrictions on the possible combinations of xi; in this case, the set x of possible values of x = (x1, ..., xn) is a proper subset of the original box x1 × ... × xn. In this paper, we show how to take into account this dependence between the inputs when computing the range of a function f(x1, ..., xn). © 2007 World Academic Press, UK. All rights reserved.
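The effect of such an extra restriction on the computed range can be seen on a small brute-force example: the range of f(x1, x2) = x1 − x2 over the box [0, 2] × [0, 2], with and without the additional constraint |x1 − x2| ≤ 0.5. The grid-search setup below is an illustrative sketch, not the paper's algorithm.

```python
def grid(lo, hi, steps=200):
    """Evenly spaced sample points of [lo, hi]."""
    return [lo + (hi - lo) * k / steps for k in range(steps + 1)]

def frange(constraint=None):
    """Approximate range of f(x1, x2) = x1 - x2 over the box [0,2] x [0,2],
    optionally restricted to points satisfying the given constraint."""
    ys = [x1 - x2
          for x1 in grid(0.0, 2.0)
          for x2 in grid(0.0, 2.0)
          if constraint is None or constraint(x1, x2)]
    return (min(ys), max(ys))

print(frange())                                    # full box: (-2.0, 2.0)
print(frange(lambda x1, x2: abs(x1 - x2) <= 0.5))  # constrained: (-0.5, 0.5)
```

Ignoring the constraint gives the range [−2, 2]; taking it into account shrinks the range to [−0.5, 0.5], i.e., the dependence between the inputs matters a great deal for the final enclosure.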
Fast Quantum Algorithms for Handling Probabilistic and Interval Uncertainty
, 2003
Abstract
In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. To estimate y, we find some easier-to-measure quantities x1, ..., xn which are related to y by a known relation y = f(x1, ..., xn). Measurements are never 100% accurate; hence, the measured values x̃i are different from xi, and the resulting estimate ỹ = f(x̃1, ..., x̃n) is different from the desired value y = f(x1, ..., xn). How different can it be? The traditional engineering approach to error estimation in data processing assumes that we know the probabilities of different measurement errors ∆xi = x̃i − xi. In many practical situations, we only know the upper bound ∆i for this error; hence, after the measurement, the only information that we have about xi is that it belongs to the interval xi = [x̃i − ∆i, x̃i + ∆i]. In this case, it is important to find the range y of all possible values of y = f(x1, ..., xn) when xi ∈ xi. We start the paper with a brief overview of the computational complexity of the corresponding interval computation problems. Most of the related problems turn out to be, in general, at least NP-hard. In this paper, we show how the use of quantum computing can speed up some computations related to interval and probabilistic uncertainty. We end the paper with speculations on whether (and how) "hypothetic" physical devices can compute NP-hard problems faster than in exponential time. Most of the paper's results were first presented at NAFIPS'2003 [30].
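For a monotone f such as the sum, the interval setting described in the abstract is easy to handle endpoint-by-endpoint: each measurement x̃i with error bound ∆i gives the interval [x̃i − ∆i, x̃i + ∆i], and the enclosure of y = x1 + ... + xn follows directly. A minimal sketch (the hard, NP-hard cases arise for general non-monotone f, which this example deliberately avoids):

```python
def interval_sum(measured, deltas):
    """Enclosure of y = x1 + ... + xn given measured values x~i
    and guaranteed error bounds Delta_i (illustrative helper)."""
    lo = sum(x - d for x, d in zip(measured, deltas))
    hi = sum(x + d for x, d in zip(measured, deltas))
    return (lo, hi)

measured = [1.0, 2.0, 3.0]   # x~1, x~2, x~3
deltas   = [0.1, 0.2, 0.05]  # Delta_1, Delta_2, Delta_3

print(interval_sum(measured, deltas))  # approximately (5.65, 6.35)
```

Each endpoint of the result is attained by pushing every input to the corresponding endpoint of its interval, which is exactly why monotone functions are the easy case of interval computation.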