Results 1–10 of 64
Managing Uncertainty and Vagueness in Description Logics for the Semantic Web
2007
"... Ontologies play a crucial role in the development of the Semantic Web as a means for defining shared terms in web resources. They are formulated in web ontology languages, which are based on expressive description logics. Significant research efforts in the semantic web community are recently direct ..."
Abstract

Cited by 58 (7 self)
 Add to MetaCart
Ontologies play a crucial role in the development of the Semantic Web as a means for defining shared terms in web resources. They are formulated in web ontology languages, which are based on expressive description logics. Significant research efforts in the Semantic Web community have recently been directed towards representing and reasoning with uncertainty and vagueness in ontologies for the Semantic Web. In this paper, we give an overview of approaches in this context to managing probabilistic uncertainty, possibilistic uncertainty, and vagueness in expressive description logics for the Semantic Web.
Random Worlds and Maximum Entropy
In Proc. 7th IEEE Symp. on Logic in Computer Science, 1994
"... Given a knowledge base KB containing firstorder and statistical facts, we consider a principled method, called the randomworlds method, for computing a degree of belief that some formula ' holds given KB . If we are reasoning about a world or system consisting of N individuals, then we can conside ..."
Abstract

Cited by 49 (12 self)
 Add to MetaCart
Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula φ holds given KB. If we are reasoning about a world or system consisting of N individuals, then we can consider all possible worlds, or first-order models, with domain {1, …, N} that satisfy KB, and compute the fraction of them in which φ is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying φ and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum-entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics and artificial intelligence, but is far more general. Of equal interest to the result itself are...
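To make the definition concrete, here is a minimal sketch of the random-worlds computation for a toy vocabulary with one unary predicate P and one constant c (the statistical fact, the tolerance interval [0.8, 0.9], and fixing c to denote element 1 are illustrative assumptions, not the paper's examples):

```python
from math import comb

def belief_in_P_of_c(n, lo=0.8, hi=0.9):
    """Random-worlds degree of belief in P(c) at domain size n, given the
    statistical fact 'between lo and hi of all individuals satisfy P'.
    A world is an extension of P over the domain {1, ..., n}; the constant
    c is fixed to denote element 1. There are comb(n, k) worlds in which
    |P| = k, of which comb(n - 1, k - 1) contain element 1."""
    ks = [k for k in range(n + 1) if lo <= k / n <= hi]
    worlds_with_c = sum(comb(n - 1, k - 1) for k in ks if k >= 1)
    worlds_total = sum(comb(n, k) for k in ks)
    return worlds_with_c / worlds_total

for n in (10, 100, 1000, 10000):
    print(n, round(belief_in_P_of_c(n), 4))
# The values drift toward 0.8: comb(n, k) grows like exp(n * H(k/n)), so the
# admissible proportion of highest entropy (the one closest to 1/2, here 0.8)
# dominates the world count as n grows -- the paper's maximum-entropy
# connection in miniature.
```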
Statistical Foundations for Default Reasoning
1993
"... We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and firstorder statements. We then assign equal probability to all w ..."
Abstract

Cited by 45 (8 self)
 Add to MetaCart
We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement φ. The degree of belief can be used to decide whether to defeasibly conclude φ. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of [Goldszmidt et al., 1990], which applies maximum entropy ideas t...
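Schematically, the recipe in this abstract can be written out as follows (the proportion notation and the ε-tolerance reading of "extreme statistical statement" are assumptions of this sketch, not a quotation of the paper):

```latex
% A default such as "birds fly" is read as an extreme statistical statement:
%   || Fly(x) | Bird(x) ||_x  >=  1 - epsilon.
% With KB_eps = { ||Fly(x)|Bird(x)||_x >= 1 - eps,  Bird(Tweety) },
% indifference over the worlds consistent with KB_eps gives
\[
  \Pr\bigl(Fly(\mathit{Tweety})\bigr)
  \;=\; \lim_{\varepsilon \to 0}\,\lim_{N \to \infty}\,
  \frac{\#\{\,W : W \models KB_\varepsilon \wedge Fly(\mathit{Tweety})\,\}}
       {\#\{\,W : W \models KB_\varepsilon\,\}}
  \;=\; 1,
\]
% so "Tweety flies" is defeasibly concluded; adding Penguin(Tweety) together
% with ||Fly(x)|Penguin(x)||_x <= eps drives the same limit to 0, which is
% the preference for more specific defaults mentioned in the abstract.
```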
Generation of random sequences by human subjects: A critical survey of the literature
Psychological Bulletin, 1972
"... The subjective concept of randomness is used in many areas of psychological research to explain a variety of experimental results. One method to study randomness is to have subjects generate random series. Unfortunately, few results of the experiments that used this method lend themselves to compari ..."
Abstract

Cited by 45 (0 self)
 Add to MetaCart
The subjective concept of randomness is used in many areas of psychological research to explain a variety of experimental results. One method to study randomness is to have subjects generate random series. Unfortunately, few results of the experiments that used this method lend themselves to comparison and synthesis, because the investigators employed such a variety of experimental conditions and definitions of mathematical randomness. Some suggestions for future research are made. In many different fields of psychological research, the concepts of "subjective chance" and "subjective randomness" have been used almost exclusively to account for unexpected results. Characteristic of subjective chance is that it is not equal to mathematical chance; subjects seem to expect dependencies between successive events in spite of the fact that they know that the events occur independently of each other. Early in this century, psychophysics became interested in this phenomenon, or the fact that successive responses of a subject are mutually dependent. In the psychophysical setting, the usual procedure is that a binary choice is made. Particularly experienced subjects are well aware that they are supposed to choose the alternatives in a random order. Even so, the subjective chance phenomenon persists. Hence, one possible explanation of the interdependency of responses is that the subject has his own idea of what a random sequence looks like. More recent research on subjective probability, probability learning, and gambling behavior also revealed that successive responses of a subject were mutually dependent in experimental settings where independent responses were expected. Once again the subjective concept of randomness was mentioned as an explanation. In experiments on telepathy the concept was used to account for too many correct predictions of serial events. Clinical psychologists...
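For a sense of the dependency statistic at issue (a generic illustration, not a measure taken from any of the surveyed studies): mathematically random binary series alternate between successive symbols about half the time, whereas human-generated "random" series typically over-alternate.

```python
import random

def alternation_rate(seq):
    """Fraction of adjacent pairs that differ: a crude statistic for
    dependency between successive events in a binary series."""
    return sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

coin = [random.randint(0, 1) for _ in range(10_000)]
print(alternation_rate(coin))          # ~0.5 for an independent fair process
print(alternation_rate([0, 1] * 50))   # 1.0: maximal alternation, yet clearly
                                       # not what a random process produces
```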
From Statistics to Beliefs
1992
"... An intelligent agent uses known facts, including statistical knowledge, to assign degrees of belief to assertions it is uncertain about. We investigate three principled techniques for doing this. All three are applications of the principle of indifference, because they assign equal degree of belief ..."
Abstract

Cited by 43 (12 self)
 Add to MetaCart
An intelligent agent uses known facts, including statistical knowledge, to assign degrees of belief to assertions it is uncertain about. We investigate three principled techniques for doing this. All three are applications of the principle of indifference, because they assign equal degree of belief to all basic "situations" consistent with the knowledge base. They differ because there are competing intuitions about what the basic situations are. Various natural patterns of reasoning, such as the preference for the most specific statistical data available, turn out to follow from some or all of the techniques. This is an improvement over earlier theories, such as work on direct inference and reference classes, which arbitrarily postulate these patterns without offering any deeper explanations or guarantees of consistency. The three methods we investigate have surprising characterizations: there are connections to the principle of maximum entropy, a principle of maximal independence, an...
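Schematically, all three techniques compute degrees of belief by counting (the rendering below is ours, not the paper's notation):

```latex
\[
  \Pr(\varphi \mid KB)
  \;=\;
  \frac{\#\{\text{basic situations consistent with } KB \wedge \varphi\}}
       {\#\{\text{basic situations consistent with } KB\}},
\]
% where, per the abstract, the three methods differ only in which objects
% play the role of the "basic situations" being counted.
```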
Probabilistic Default Reasoning with Conditional Constraints
Ann. Math. Artif. Intell., 2000
"... We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of , lexicographic, ..."
Abstract

Cited by 35 (20 self)
 Add to MetaCart
We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in System Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they show behavior similar to reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have properties similar to those of their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational com...
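As a schematic illustration in interval-valued conditional-constraint style (the concrete syntax and the numbers here are illustrative assumptions, not examples from the paper):

```latex
\[
  KB \;=\; \{\, (bird \mid penguin)[1,1],\;\; (bird \mid magpie)[1,1],\;\;
               (fly \mid bird)[0.95,\,1],\;\; (fly \mid penguin)[0,\,0.05] \,\}
\]
% Under the lexicographic entailment sketched above, magpies inherit the
% generic constraint, yielding (fly | magpie)[0.95, 1], while for penguins
% the more specific constraint wins, yielding (fly | penguin)[0, 0.05] --
% the reference-class-like behavior noted in the abstract, but derived
% globally from the whole knowledge base.
```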
A Counterexample to Theorems of Cox and Fine
Journal of Artificial Intelligence Research, 1999
"... Cox's wellknown theorem justifying the use of probability is shown not to hold infinite domains. The counterexample also suggests that Cox's assumptions are insu cient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative condition ..."
Abstract

Cited by 34 (2 self)
 Add to MetaCart
Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability.
Defeasible reasoning with variable degrees of justification
Artificial Intelligence, 2002
"... The question addressed in this paper is how the degree of justification of a belief is determined. A conclusion may be supported by several different arguments, the arguments typically being defeasible, and there may also be arguments of varying strengths for defeaters for some of the supporting arg ..."
Abstract

Cited by 24 (1 self)
 Add to MetaCart
The question addressed in this paper is how the degree of justification of a belief is determined. A conclusion may be supported by several different arguments, the arguments typically being defeasible, and there may also be arguments of varying strengths for defeaters for some of the supporting arguments. What is sought is a way of computing the “on sum” degree of justification of a conclusion in terms of the degrees of justification of all relevant premises and the strengths of all relevant reasons. I have in the past defended various principles pertaining to this problem. In this paper I reaffirm some of those principles but propose a significantly different final analysis. Specifically, I endorse the weakest link principle for the computation of argument strengths. According to this principle, the degree of justification an argument confers on its conclusion in the absence of other relevant arguments is the minimum of the degrees of justification of its premises and the strengths of the reasons employed in the argument. I reaffirm my earlier rejection of the accrual of reasons, according to which two arguments for a conclusion can result in a higher degree of justification than either argument by itself. This paper diverges from my earlier theory mainly in its treatment of defeaters.
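A minimal sketch of the weakest-link computation described here (the data layout and the max-combination of parallel arguments are our illustrative choices; consistent with the paper's rejection of accrual, parallel arguments do not add up):

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    """A toy argument: degrees of justification of its premises, strengths
    of the reasons (inference steps) it employs, and any sub-arguments
    that supply further premises."""
    premises: list            # degrees of justification, floats in [0, 1]
    reasons: list             # strengths of the reasons employed
    subarguments: list = field(default_factory=list)

def weakest_link(arg):
    """Weakest-link principle: absent defeaters, an argument confers on its
    conclusion the minimum of its premises' degrees of justification and
    its reasons' strengths, recursing through sub-arguments."""
    return min(arg.premises + arg.reasons +
               [weakest_link(sub) for sub in arg.subarguments])

# Two independent arguments for one conclusion: rejecting accrual, the
# conclusion's degree of justification is the better argument's strength,
# not some sum of the two.
a1 = Argument(premises=[0.9], reasons=[0.7])
a2 = Argument(premises=[0.6], reasons=[0.8])
print(max(weakest_link(a1), weakest_link(a2)))  # 0.7
```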
Weak nonmonotonic probabilistic logics
"... Towards probabilistic formalisms for resolving local inconsistencies under modeltheoretic probabilistic entailment, we present probabilistic generalizations of Pearl’s entailment in System Z and Lehmann’s lexicographic entailment. We then analyze the nonmonotonic and semantic properties of the new ..."
Abstract

Cited by 21 (6 self)
 Add to MetaCart
Towards probabilistic formalisms for resolving local inconsistencies under model-theoretic probabilistic entailment, we present probabilistic generalizations of Pearl’s entailment in System Z and Lehmann’s lexicographic entailment. We then analyze the nonmonotonic and semantic properties of the new notions of entailment. In particular, we show that they satisfy the rationality postulates of System P and the property of Rational Monotonicity. Moreover, we show that model-theoretic probabilistic entailment is stronger than the new notion of lexicographic entailment, which in turn is stronger than the new notion of entailment in System Z. As an important feature of the new notions of entailment in System Z and lexicographic entailment, we show that they coincide with model-theoretic probabilistic entailment whenever there are no local inconsistencies. We also show that the new notions of entailment in System Z and lexicographic entailment are proper generalizations of their classical counterparts. Finally, we present algorithms for reasoning under the new formalisms, and we give a precise picture of their computational complexity.
Causal discovery via MML
In: Proceedings of the Thirteenth International Conference on Machine Learning, 1996
"... Automating the learning of causal models from sample data is a key step toward incorporating machine learning into decisionmaking and reasoning under uncertainty. This paper presents a Bayesian approach to the discovery of causal models, using a Minimum Message Length (MML) method. We have developed ..."
Abstract

Cited by 20 (10 self)
 Add to MetaCart
Automating the learning of causal models from sample data is a key step toward incorporating machine learning into decision-making and reasoning under uncertainty. This paper presents a Bayesian approach to the discovery of causal models, using a Minimum Message Length (MML) method. We have developed encoding and search methods for discovering linear causal models. The initial experimental results presented in this paper show that the MML induction approach can recover, from generated data, causal models that are quite accurate reflections of the original models, and that it compares favorably with TETRAD II (Spirtes et al. 1994) even when TETRAD II is supplied with prior temporal information and MML is not.
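The minimize-total-message idea can be shown with a deliberately crude two-part code (this scoring is a BIC-style stand-in, not the paper's actual MML encoding for linear causal models):

```python
import numpy as np

def message_length(residuals, n_params, n):
    """Toy two-part message: cost of stating the model's parameters (crudely
    (n_params / 2) * log n) plus the cost of the data given the model
    (Gaussian negative log-likelihood of the residuals)."""
    sigma2 = residuals.var() + 1e-12
    data_cost = 0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return 0.5 * n_params * np.log(n) + data_cost

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)       # generating model: x -> y

slope = (x @ y) / (x @ x)                         # least-squares fit of y on x
len_causal = message_length(y - slope * x, n_params=2, n=n)   # slope + noise
len_indep = message_length(y - y.mean(), n_params=2, n=n)     # mean + noise
print("x -> y" if len_causal < len_indep else "independent")  # shortest wins
```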