Results 11 – 20 of 146
The Lattice of Fuzzy Intervals and Sufficient Conditions for Its Distributivity
, 2001
"... Given a reference lattice (X, _), we define fuzzy intervals to be the fuzzy sets such that their p cuts are crisp closed intervals of (X, _). We show that: given a complete lattice (X, _) the collection of its fuzzy intervals is a complete lattice. Furthermore we show that: if (X, _) is completel ..."
Abstract

Cited by 7 (0 self)
Given a reference lattice (X, ≤), we define fuzzy intervals to be the fuzzy sets whose p-cuts are crisp closed intervals of (X, ≤). We show that, given a complete lattice (X, ≤), the collection of its fuzzy intervals is a complete lattice. Furthermore, we show that if (X, ≤) is completely distributive, then the lattice of its fuzzy intervals is distributive.
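The p-cut construction can be made concrete with a small sketch. The following Python snippet is purely illustrative (the triangular membership shape and all endpoint values are my choices, not the paper's): it shows that the p-cuts of a triangular fuzzy number over the reference lattice (R, ≤) are nested crisp closed intervals.

```python
# Illustrative sketch (not from the paper): p-cuts of a triangular
# fuzzy number over the reference lattice (R, <=). Each p-cut is a
# crisp closed interval, and the cuts are nested as p grows.

def triangular_membership(x, a, b, c):
    """Membership degree of x in the triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def p_cut(p, a, b, c):
    """Closed interval {x : membership(x) >= p}, for 0 < p <= 1."""
    return (a + p * (b - a), c - p * (c - b))

# The p-cuts of (0, 5, 10) shrink toward the peak as p increases.
lo1, hi1 = p_cut(0.25, 0, 5, 10)
lo2, hi2 = p_cut(0.75, 0, 5, 10)
assert lo1 <= lo2 <= hi2 <= hi1   # nested closed intervals
```

Taking meets and joins of such intervals endpoint-wise is what makes the collection of fuzzy intervals a lattice in its own right.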
On Hardware Support For Interval Computations And For Soft Computing: Theorems
, 1994
"... This paper provides a rationale for providing hardware supported functions of more than two variables for processing incomplete knowledge and fuzzy knowledge. The result is in contrast to Kolmogorov's theorem in numerical (nonfuzzy) case. ..."
Abstract

Cited by 7 (4 self)
This paper provides a rationale for hardware-supported functions of more than two variables for processing incomplete knowledge and fuzzy knowledge. The result stands in contrast to Kolmogorov's theorem for the numerical (non-fuzzy) case.
Fast Algorithms for Computing Statistics under Interval Uncertainty, with Applications to Computer Science and to Electrical and Computer Engineering
, 2007
"... Computing statistics is important. In many engineering applications, we are interested in computing statistics. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such ..."
Abstract

Cited by 6 (3 self)
Computing statistics is important. In many engineering applications, we are interested in computing statistics. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. For each of these characteristics C, there is an expression C(x1, ..., xn) that enables us to provide an estimate for C based on the observed values x1, ..., xn. For example: a reasonable statistic for estimating the mean value of a probability distribution is the population average E(x1, ..., xn) = (1/n) · (x1 + ... + xn); a reasonable statistic for estimating the variance V is the population variance V(x1, ..., xn) = (1/n) · ((x1 − E)² + ... + (xn − E)²).
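As a hedged illustration of the two estimators named above, for crisp (point-valued) data only; the paper's actual contribution concerns the much harder interval-valued case:

```python
# Minimal sketch of the two estimators named in the abstract, applied
# to crisp readings x1, ..., xn (the data values are hypothetical).

def population_average(xs):
    """E(x1, ..., xn) = (1/n) * (x1 + ... + xn)."""
    return sum(xs) / len(xs)

def population_variance(xs):
    """V(x1, ..., xn) = (1/n) * ((x1 - E)^2 + ... + (xn - E)^2)."""
    e = population_average(xs)
    return sum((x - e) ** 2 for x in xs) / len(xs)

pollution = [1.0, 2.0, 3.0, 4.0]      # hypothetical lake readings x(t)
print(population_average(pollution))   # 2.5
print(population_variance(pollution))  # 1.25
```

Under interval uncertainty each xi is replaced by an interval [xi⁻, xi⁺], and computing the exact range of V over all consistent tuples is, in general, computationally hard; that is what the fast algorithms of the paper address.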
An example of L-fuzzy Join Space
 Rend. Circ. Mat. Palermo
, 2001
"... On a generalized deMorgan lattice (X, _<, V, A,') we introduce a family of join hyperoperations *p, parametrized by a parameter p X. As a result we obtain a family of join spaces (X, ,p). We show that: for every a, b X the family {a*p b}pex can be considered as the pcuts of a Lfuzzy set a * b; ..."
Abstract

Cited by 6 (0 self)
On a generalized de Morgan lattice (X, ≤, ∨, ∧, ′) we introduce a family of join hyperoperations *p, parametrized by a parameter p ∈ X. As a result we obtain a family of join spaces (X, *p). We show that, for every a, b ∈ X, the family {a *p b}p∈X can be considered as the p-cuts of an L-fuzzy set a * b; in this manner we synthesize an L-fuzzy hyperoperation which takes pairs from X to L-fuzzy subsets of X. We then show that (X, *p) is an L-fuzzy hypergroup (in the sense of Corsini) and can be considered as an L-fuzzy join space. Furthermore, a * b is an L-fuzzy interval for all a, b ∈ X.
The Strength of Statistical Evidence for Composite Hypotheses: Inference to the Best Explanation
, 2010
"... A general function to quantify the weight of evidence in a sample of data for one hypothesis over another is derived from the law of likelihood and from a statistical formalization of inference to the best explanation. For a fixed parameter of interest, the resulting weight of evidence that favors o ..."
Abstract

Cited by 6 (4 self)
A general function to quantify the weight of evidence in a sample of data for one hypothesis over another is derived from the law of likelihood and from a statistical formalization of inference to the best explanation. For a fixed parameter of interest, the resulting weight of evidence that favors one composite hypothesis over another is the likelihood ratio using the parameter value consistent with each hypothesis that maximizes the likelihood function over the parameter of interest. Since the weight of evidence is generally known only up to a nuisance parameter, it is approximated by replacing the likelihood function with a reduced likelihood function on the interest parameter space. Unlike the Bayes factor, and unlike the p-value under interpretations that extend its scope, the weight of evidence is coherent in the sense that it cannot support a hypothesis over any hypothesis that it entails. Further, when comparing the hypothesis that the parameter lies outside a nontrivial interval to the hypothesis that it lies within the interval, the proposed method of weighing evidence almost always asymptotically favors the correct hypothesis.
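The ratio of maximized likelihoods described above can be sketched in a toy setting. The model below, a Normal(theta, 1) sample with hypotheses split at a cut point, is my choice for illustration and is not the paper's example; it only shows the mechanics of representing each composite hypothesis by its best-fitting parameter value.

```python
# Hedged sketch (model and names are illustrative, not the paper's):
# weight of evidence for one composite hypothesis over another as a
# ratio of maximized likelihoods under a Normal(theta, 1) model.

import math

def log_likelihood(theta, xs):
    """Log-likelihood of a Normal(theta, 1) sample."""
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - theta) ** 2
               for x in xs)

def weight_of_evidence(xs, cut=0.0):
    """Likelihood ratio for H1: theta > cut versus H0: theta <= cut.

    Each composite hypothesis is represented by the parameter value
    consistent with it that maximizes the likelihood; for this model
    that is the sample mean, clipped into the hypothesis region.
    """
    mean = sum(xs) / len(xs)
    theta1 = max(mean, cut)   # maximizer within (the closure of) H1
    theta0 = min(mean, cut)   # maximizer within H0
    return math.exp(log_likelihood(theta1, xs) - log_likelihood(theta0, xs))

data = [0.8, 1.2, 0.5, 1.0]   # hypothetical observations
assert weight_of_evidence(data) > 1.0   # data favor theta > 0
```

Coherence shows up directly here: a hypothesis can never beat a larger hypothesis that contains it, because the maximum over the larger region is at least as big.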
Towards Foundations of Processing Imprecise Data: From Traditional Statistical Techniques of Processing Crisp Data to Statistical Processing of Fuzzy Data
 Proceedings of the International Conference on Fuzzy Information Processing: Theories and Applications FIP’2003
, 2002
"... In traditional statistics, we process crisp data  usually, results of measurements and/or observations. Not all the knowledge comes from measurements and observations. In many reallife situations, in addition to the results of measurements and observations, we have expert estimates, estimates tha ..."
Abstract

Cited by 6 (3 self)
In traditional statistics, we process crisp data: usually, results of measurements and/or observations. Not all knowledge, however, comes from measurements and observations. In many real-life situations, in addition to the results of measurements and observations, we have expert estimates, estimates that are often formulated in terms of natural language, like "x is large". Before we can analyze how to process these statements, we must be able to translate them into a language that a computer can understand. This translation of expert statements from natural language into a precise language of numbers is one of the main original objectives of fuzzy logic. It is therefore important to extend traditional statistical techniques from processing crisp data to processing fuzzy data. In this paper, we provide an overview of our related research. Keywords: statistical processing, interval data, fuzzy data, random sets, possibility theory.
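The translation step can be illustrated with a minimal membership function. The ramp shape and the thresholds below are my assumptions for the sketch (fuzzy logic leaves them to the expert or the application), not values from the paper:

```python
# Illustrative sketch: translating the expert statement "x is large"
# into a fuzzy membership function a computer can process. The ramp
# shape and thresholds (50, 100) are assumptions, not the paper's.

def is_large(x, low=50.0, high=100.0):
    """Degree, in [0, 1], to which x counts as 'large'.

    Below `low` the statement is fully false, above `high` fully
    true, with a linear ramp in between (a common fuzzy-logic choice).
    """
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

assert is_large(40.0) == 0.0
assert is_large(75.0) == 0.5
assert is_large(120.0) == 1.0
```

Once expert statements are encoded this way, statistical techniques can be extended from crisp samples to samples of such membership functions.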
Towards The Use of Aesthetics in Decision Making: Kolmogorov Complexity Formalizes Birkhoff's Idea
 Bulletin of the European Association for Theoretical Computer Science (EATCS)
, 1998
"... Decision making is traditionally based on utilitarian criteria such as cost, efficiency, time, etc. These criteria are reasonably easy to formalize; hence, for such criteria, we can select the best decision by solving the corresponding welldefined optimization problem. In many engineering projects, ..."
Abstract

Cited by 5 (0 self)
Decision making is traditionally based on utilitarian criteria such as cost, efficiency, time, etc. These criteria are reasonably easy to formalize; hence, for such criteria, we can select the best decision by solving the corresponding well-defined optimization problem. In many engineering projects, however, e.g., in designing cars, buildings, airplanes, etc., an important additional criterion which needs to be satisfied is that the designed object should be good-looking. This additional criterion is difficult to formalize and, because of that, it is rarely taken into consideration in formal decision making. In the 1930s, the famous mathematician G. D. Birkhoff proposed a formula that described beauty in terms of "order" and "complexity". In the simplest cases, he formalized these notions and showed that his formula indeed works. However, since there was no general notion of complexity, he was unable to formalize his idea in the general case. In this paper, we show that the exi...
Fuzzy (Granular) Levels of Quality, With Applications to Data Mining and to Structural Integrity of Aerospace Structures
, 2000
"... Experts usually describe quality by using words from natural language such as "perfect", "good", etc. In this paper, we deduce natural numerical values corresponding to these words, and show that these values explain empirical dependencies uncovered in data mining and in the analysis of structural i ..."
Abstract

Cited by 5 (5 self)
Experts usually describe quality by using words from natural language such as "perfect", "good", etc. In this paper, we deduce natural numerical values corresponding to these words, and show that these values explain empirical dependencies uncovered in data mining and in the analysis of structural integrity of aerospace structures. 1. Formulation of the Problem. In mathematical descriptions, quality is characterized by a numerical value of an appropriately chosen objective function. In real life, however, to describe quality, we use words such as "perfect", "good", etc. We therefore need to relate numerical values with words describing quality. If we already have a numerical value, then fuzzy logic provides us with a reasonable technique for translating this numerical value into words. Often, we face the opposite problem: we have an expert's estimate of quality in terms of words, and we must translate this estimate into numbers so that we will be able to combine this qualit...
Cooperative Learning is Better: Explanation Using Dynamical Systems, Fuzzy Logic, and Geometric Symmetries
, 1998
"... this paper, we will consider optimality criteria on the set \Phi of all families. ..."
Abstract

Cited by 4 (4 self)
this paper, we will consider optimality criteria on the set Φ of all families.
Towards Optimal Placement of BioWeapon Detectors
"... Abstract—Biological weapons are difficult and expensive to detect. Within a limited budget, we can afford a limited number of bioweapon detector stations. It is therefore important to find the optimal locations for such stations. A natural idea is to place more detectors in the areas with more popu ..."
Abstract

Cited by 4 (2 self)
Abstract—Biological weapons are difficult and expensive to detect. Within a limited budget, we can afford only a limited number of bioweapon detector stations. It is therefore important to find the optimal locations for such stations. A natural idea is to place more detectors in the areas with more population – and fewer in desert areas, with fewer people. However, such a common-sense analysis does not tell us how many detectors to place where. To decide on the exact placement of bioweapon detectors, we formulate the placement problem in precise terms, and come up with an (almost) explicit solution to the resulting optimization problem.
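The common-sense starting point, more detectors where more people live, can at least be made exact as a budget split. The proportional rule below is my illustration of that starting point only; it is not the paper's optimal solution, which comes from solving the full optimization problem.

```python
# Hedged illustration (allocation rule is mine, not the paper's
# solution): splitting a fixed budget of detector stations across
# regions in proportion to population, with largest-remainder
# rounding so the allocations sum exactly to the budget.

def allocate_detectors(populations, total):
    """Integer detector counts per region, proportional to population."""
    whole = sum(populations)
    quotas = [total * p / whole for p in populations]   # fractional shares
    counts = [int(q) for q in quotas]                   # floor of each share
    leftover = total - sum(counts)
    # Hand the remaining detectors to the largest fractional parts.
    by_remainder = sorted(range(len(quotas)),
                          key=lambda i: quotas[i] - counts[i],
                          reverse=True)
    for i in by_remainder[:leftover]:
        counts[i] += 1
    return counts

# Three hypothetical regions, ten detectors.
print(allocate_detectors([600, 300, 100], 10))   # [6, 3, 1]
```

The paper's point is precisely that this naive proportionality need not be optimal; the exact placement follows from the formulated optimization problem instead.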