Results 1-7 of 7
Defeasible Logic
Handbook of Logic in Artificial Intelligence and Logic Programming, 2001
Abstract

Cited by 219 (4 self)
We often reach conclusions partially on the basis that we do not have evidence that the conclusion is false. A newspaper story warning that the local water supply has been contaminated would prevent a person from drinking water from the tap in her home. This suggests that the absence of such evidence contributes to her usual belief that her water is safe. On the other hand, if a reasonable person received a letter telling her that she had won a million dollars, she would consciously consider whether there was any evidence that the letter was a hoax or somehow misleading before making plans to spend the money. All too often we arrive at conclusions which we later retract when contrary evidence becomes available. The contrary evidence defeats our earlier reasoning. Much of our reasoning is defeasible in this way. Since around 1980, considerable research in AI has focused on how to model reasoning of this sort. In this paper, I describe one theoretical approach to this problem, discuss implementation of this approach as an extension of Prolog, and describe some applications of this work to normative reasoning, learning, planning, and other types of automated reasoning.
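The retract-on-contrary-evidence pattern described in this abstract can be sketched in a few lines of Python. This is a toy illustration, not the paper's Prolog extension; the `Default` class and `conclude` function are assumed names for this sketch.

```python
# Minimal sketch of defeasible inference: a default rule fires when its
# premise is known, unless one of its defeaters is also in the evidence.
class Default:
    def __init__(self, premise, conclusion, defeaters):
        self.premise = premise            # fact required for the rule to fire
        self.conclusion = conclusion      # defeasible conclusion
        self.defeaters = set(defeaters)   # contrary evidence that blocks it

def conclude(evidence, rules):
    """Conclusions of rules whose premise holds and no defeater is known."""
    return {r.conclusion for r in rules
            if r.premise in evidence and not (r.defeaters & set(evidence))}

# The water-supply example from the abstract, encoded as one default rule.
safe_water = Default("tap_water", "safe_to_drink", {"contamination_warning"})

print(conclude({"tap_water"}, [safe_water]))
# New contrary evidence defeats the earlier conclusion:
print(conclude({"tap_water", "contamination_warning"}, [safe_water]))
```

The first call yields `{'safe_to_drink'}`; adding the warning to the evidence retracts it, which is exactly the nonmonotonic behavior the abstract describes.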
Random Worlds and Maximum Entropy
In Proc. 7th IEEE Symp. on Logic in Computer Science, 1994
Abstract

Cited by 56 (13 self)
Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula φ holds given KB. If we are reasoning about a world or system consisting of N individuals, then we can consider all possible worlds, or first-order models, with domain {1, ..., N} that satisfy KB, and compute the fraction of them in which φ is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying φ and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum-entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics and artificial intelligence, but is far more general. Of equal interest to the result itself are...
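The fraction-of-worlds computation in this abstract can be illustrated by brute force for a single unary predicate over a tiny domain. This is a sketch under assumed toy encodings (a world is a tuple of booleans; KB and φ are Python predicates on a world), not the paper's asymptotic machinery.

```python
from itertools import product

def degree_of_belief(phi, kb, n):
    """Fraction of worlds over domain {1..n} satisfying kb in which phi holds."""
    # Each world assigns True/False to one unary predicate P for n individuals.
    worlds = [w for w in product([False, True], repeat=n) if kb(w)]
    return sum(phi(w) for w in worlds) / len(worlds)

kb = lambda w: any(w)    # KB: at least one individual has property P
phi = lambda w: w[0]     # phi: individual 1 has property P

print(degree_of_belief(phi, kb, 3))   # 4/7
```

Of the 7 worlds over {1, 2, 3} satisfying KB, 4 make φ true, giving degree of belief 4/7; as N grows this fraction approaches 1/2, the asymptotic value the method takes as the degree of belief.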
Statistical Foundations for Default Reasoning
1993
Abstract

Cited by 48 (8 self)
We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement φ. The degree of belief can be used to decide whether to defeasibly conclude φ. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of [Goldszmidt et al., 1990], which applies maximum entropy ideas t...
The Effect of Knowledge on Belief: Conditioning, Specificity and the Lottery Paradox in Default Reasoning
Artificial Intelligence, 1993
Abstract

Cited by 27 (3 self)
How should what one knows about an individual affect default conclusions about that individual? This paper contrasts two views of "knowledge" in default reasoning systems. The first is the traditional view that one knows the logical consequences of one's knowledge base. It is shown how, under this interpretation, having to know an exception is too strong for default reasoning. It is argued that we need to distinguish "background" and "contingent" knowledge in order to be able to handle specificity, and that this is a natural distinction. The second view of knowledge is what is contingently known about the world under consideration. Using this view of knowledge, a notion of conditioning that seems like a minimal property of a default is defined. Finally, a qualitative version of the lottery paradox is given; if we want to be able to say that individuals that are typical in every respect do not exist, we should not expect to conclude the conjunction of our default conclusions. This paper...
Lp, A Logic for Representing and Reasoning with Statistical Knowledge
1990
Abstract

Cited by 17 (0 self)
This paper presents a logical formalism for representing and reasoning with statistical knowledge. One of the key features of the formalism is its ability to deal with qualitative statistical information. It is argued that statistical knowledge, especially that of a qualitative nature, is an important component of our world knowledge and that such knowledge is used in many different reasoning tasks. The work is further motivated by the observation that previous formalisms for representing probabilistic information are inadequate for representing statistical knowledge. The representation mechanism takes the form of a logic that is capable of representing a wide variety of statistical knowledge, and that possesses an intuitive formal semantics based on the simple notions of sets of objects and probabilities defined over those sets. Furthermore, a proof theory is developed and is shown to be sound and complete. The formalism offers a perspicuous and powerful representational tool for stat...
Evaluating Defaults
2002
Abstract
We seek to find normative criteria of adequacy for nonmonotonic logic similar to the criterion of validity for deductive logic. Rather than stipulating that the conclusion of an inference be true in all models in which the premises are true, we require that the conclusion of a nonmonotonic inference be true in “almost all” models of a certain sort in which the premises are true. This “certain sort” specification picks out the models that are relevant to the inference, taking into account factors such as specificity and vagueness, and previous inferences. The frequencies characterizing the relevant models reflect known frequencies in our actual world. The criteria of adequacy for a default inference can be extended by thresholding to criteria of adequacy for an extension. We show that this avoids the implausibilities that might otherwise result from the chaining of default inferences. The model proportions, when construed in terms of frequencies, provide a verifiable grounding of default rules, and can become the basis for generating default rules from statistics.