Results 1–10 of 50
Probabilistic Default Reasoning with Conditional Constraints
 Annals of Mathematics and Artificial Intelligence
, 2000
Abstract

Cited by 35 (20 self)
We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in system Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they show a behavior similar to reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have similar properties as their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational com...
A Counterexample to Theorems of Cox and Fine
 Journal of Artificial Intelligence Research
, 1999
Abstract

Cited by 33 (2 self)
Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability.
Qualitative decision theory: from Savage’s axioms to nonmonotonic reasoning
 Journal of the ACM
, 2002
Abstract

Cited by 32 (0 self)
This paper investigates to what extent a purely symbolic approach to decision making under uncertainty is possible, in the scope of Artificial Intelligence. Contrary to classical approaches to decision theory, we try to rank acts without resorting to any numerical representation of utility or uncertainty, and without using any scale on which both uncertainty and preference could be mapped. Our approach is a variant of Savage's, where the setting is finite and the strict preference on acts is a partial order. It is shown that although many axioms of Savage's theory are preserved, and despite the intuitive appeal of the ordinal method for constructing a preference over acts, the approach is inconsistent with a probabilistic representation of uncertainty. The latter leads to the kind of paradoxes encountered in the theory of voting. It is shown that the assumption of ordinal invariance enforces a qualitative decision procedure that presupposes a comparative possibility representation of uncertainty, originally due to Lewis, and usual in nonmonotonic reasoning. Our axiomatic investigation thus provides decision-theoretic foundations to the preferential inference of Lehmann and colleagues. However, the obtained decision rules are sometimes either not very decisive or may lead to overconfident decisions, although their basic principles look sound. This paper points out some limitations of purely ordinal approaches to Savage-like decision making under uncertainty, in perfect analogy with similar difficulties in voting theory.
Nonmonotonic Logics and Semantics
 Journal of Logic and Computation
, 2001
Abstract

Cited by 29 (4 self)
Tarski gave a general semantics for deductive reasoning: a formula a may be deduced from a set A of formulas iff a holds in all models in which each of the elements of A holds. A more liberal semantics has been considered: a formula a may be deduced from a set A of formulas iff a holds in all of the preferred models in which all the elements of A hold. Shoham proposed that the notion of preferred models be defined by a partial ordering on the models of the underlying language. A more general semantics is described in this paper, based on a set of natural properties of choice functions. This semantics is here shown to be equivalent to a semantics based on comparing the relative importance of sets of models, by what amounts to a qualitative probability measure. The consequence operations defined by the equivalent semantics are then characterized by a weakening of Tarski's properties in which the monotonicity requirement is replaced by three weaker conditions. Classical propositional connectives are characterized by natural introduction-elimination rules in a nonmonotonic setting. Even in the nonmonotonic setting, one obtains classical propositional logic, thus showing that monotonicity is not required to justify classical propositional connectives.
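The contrast between Tarski's consequence and the preferential one can be made concrete over propositional models. The following sketch is illustrative only: the two atoms and the "most normal worlds" preference function are assumptions chosen for the example, not taken from the paper.

```python
from itertools import product

# Propositional models over two atoms; each model is a dict atom -> bool.
ATOMS = ("bird", "flies")
MODELS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=2)]

def classical_consequence(premises, conclusion):
    """Tarski: conclusion holds in every model satisfying all premises."""
    return all(conclusion(m) for m in MODELS if all(p(m) for p in premises))

def preferential_consequence(premises, conclusion, preferred):
    """Shoham: conclusion need only hold in the *preferred* premise models."""
    sat = [m for m in MODELS if all(p(m) for p in premises)]
    return all(conclusion(m) for m in preferred(sat))

def most_normal(models):
    """Illustrative preference: keep models where the most atoms are true."""
    if not models:
        return models
    best = max(sum(m.values()) for m in models)
    return [m for m in models if sum(m.values()) == best]

bird = lambda m: m["bird"]
flies = lambda m: m["flies"]

# Classically 'bird' does not entail 'flies'; under this preference it does,
# which is exactly the liberalization the abstract describes.
print(classical_consequence([bird], flies))                  # False
print(preferential_consequence([bird], flies, most_normal))  # True
```

Monotonicity fails for the preferential relation: adding a premise can shrink the set of preferred models and retract earlier conclusions, which is why the characterization replaces it with weaker conditions.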
Modeling Belief in Dynamic Systems. Part II: Revision and Update
 Journal of Artificial Intelligence Research
, 1999
Abstract

Cited by 26 (7 self)
The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper [Friedman and Halpern 1997a], we introduce a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the change of beliefs over time. In this paper, we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method, and to better understand the principles underlying them. In particular, it shows that Katsuno and Mendelzon's notion of belief update [Katsuno and Mendelzon 1991a] depends on several strong assumptions that may limit its applicability in artificial intelligence. Finally, our analysis allows us to identify a notion of minimal change that underlies a broad range of belief change operations including revi...
Plausibility Measures: A User's Guide
 In Proc. Eleventh Conference on Uncertainty in Artificial Intelligence (UAI '95)
, 1995
Abstract

Cited by 24 (7 self)
We examine a new approach to modeling uncertainty based on plausibility measures, where a plausibility measure just associates with an event its plausibility, an element of some partially ordered set. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The lack of structure in a plausibility measure makes it easy for us to add structure on an "as needed" basis, letting us examine what is required to ensure that a plausibility measure has certain properties of interest. This gives us insight into the essential features of the properties in question, while allowing us to prove general results that apply to many approaches to reasoning about uncertainty. Plausibility measures have already proved useful in analyzing default reasoning. In this paper, we examine their "algebraic properties", analogues to the use of + and × in probability theory. An understanding of such properties ...
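The generality described above can be sketched by writing two plausibility measures over the same worlds: one numeric (a probability measure, whose values are totally ordered) and one purely qualitative (values compared only by a partial order). The three-world space and both concrete measures are assumptions made for illustration, not constructions from the paper.

```python
from fractions import Fraction
from itertools import chain, combinations

# Events are subsets (frozensets) of a small set of worlds.
WORLDS = frozenset({"w1", "w2", "w3"})

def events(worlds):
    ws = list(worlds)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(ws, r) for r in range(len(ws) + 1))]

# Special case 1: a probability measure; values live in the totally
# ordered set [0, 1].
prob = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}
def pl_prob(event):
    return sum((prob[w] for w in event), Fraction(0))

# Special case 2: a purely qualitative measure; the plausibility of an event
# is the event itself, compared by set inclusion. This order is genuinely
# partial: Pl({w1}) and Pl({w2}) are incomparable.
def pl_qual(event):
    return event

def leq_qual(a, b):
    return a <= b  # subset order on the value set

# The one property every plausibility measure must satisfy:
# A subset of B implies Pl(A) <= Pl(B).
for A in events(WORLDS):
    for B in events(WORLDS):
        if A <= B:
            assert pl_prob(A) <= pl_prob(B)
            assert leq_qual(pl_qual(A), pl_qual(B))
print("monotonicity holds for both measures")
```

The "as needed" structure the abstract mentions corresponds to imposing extra axioms on the value set (e.g., analogues of + and ×) only when a result requires them.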
Modeling Belief in Dynamic Systems. Part I: Foundations
 Artificial Intelligence
, 1997
Abstract

Cited by 23 (11 self)
Belief change is a fundamental problem in AI: Agents constantly have to update their beliefs to accommodate new observations. In recent years, there has been much work on axiomatic characterizations of belief change. We claim that a better understanding of belief change can be gained from examining appropriate semantic models. In this paper we propose a general framework in which to model belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes φ if he knows that φ is more plausible than ¬φ. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator. Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagi...
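The definition "believe φ iff φ is more plausible than ¬φ" can be sketched in a simple special case where plausibility is given by a ranking over worlds (lower rank = more plausible). The two-world scenario, the ranks, and the fact sets below are illustrative assumptions, not the paper's general plausibility structures.

```python
# Ranking-function special case of comparative plausibility.
# Lower rank = more plausible world.
RANKS = {
    "w_normal_bird": 0,  # most plausible world
    "w_penguin": 1,      # exceptional world
}

FACTS = {
    "w_normal_bird": {"bird", "flies"},
    "w_penguin": {"bird"},
}

def rank(worlds):
    """Plausibility of a set of worlds = rank of its most plausible member."""
    return min((RANKS[w] for w in worlds), default=float("inf"))

def believes(prop):
    """Believe prop iff the prop-worlds are strictly more plausible
    than the worlds where prop fails."""
    yes = {w for w in RANKS if prop in FACTS[w]}
    no = set(RANKS) - yes
    return rank(yes) < rank(no)

print(believes("flies"))  # True: the flying world outranks the penguin world
print(believes("bird"))   # True: there are no bird-free worlds at all
```

Note that `believes` can hold for φ without φ being true in every world, which is exactly what separates belief from knowledge in this setting.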
Default Reasoning from Conditional Knowledge Bases: Complexity and Tractable Cases
 Artificial Intelligence
, 2000
Abstract

Cited by 21 (13 self)
Conditional knowledge bases have been proposed as belief bases that include defeasible rules (also called defaults) of the form "φ → ψ", which informally read as "generally, if φ then ψ." Such rules may have exceptions, which can be handled in different ways. A number of entailment semantics for conditional knowledge bases have been proposed in the literature. However, while the semantic properties and interrelationships of these formalisms are quite well understood, about their computational properties only partial results are known so far. In this paper, we fill these gaps and first draw a precise picture of the complexity of default reasoning from conditional knowledge bases: Given a conditional knowledge base KB and a default φ → ψ, does KB entail φ → ψ? We classify the complexity of this problem for a number of well-known approaches (including Goldszmidt et al.'s maximum entropy approach and Geffner's conditional entailment), where we consider the general propositional case as well as natural syntactic restrictions (in particular, to Horn and literal-Horn conditional knowledge bases). As we show, the more sophisticated semantics for conditional knowledge bases are plagued with intractability in all these fragments. We thus explore cases in which these semantics are tractable, and find that most of them enjoy this property on feedback-free Horn conditional knowledge bases, which constitute a new, meaningful class of conditional knowledge bases. Furthermore, we generalize previous tractability results from Horn to q-Horn conditional knowledge bases, which allow for a limited use of disjunction. Our results complement and extend previous results, and contribute in refining the tractability/intractability frontier of default reasoning from conditional know...
A Modal Logic for Subjective Default Reasoning
, 1994
Abstract

Cited by 16 (0 self)
In this paper we introduce DML: Default Modal Logic. DML is a logic endowed with a two-place modal connective that has the intended meaning of "If α, then normally β". On top of providing a well-defined tool for analyzing common default reasoning, DML allows nesting of the default operator. We present a semantic framework in which many of the known default proof systems can be naturally characterized, and prove soundness and completeness theorems for several such proof systems. Our semantics is a "neighbourhood modal semantics", and it allows for subjective defaults, that is, defaults may vary among different worlds within the same model. The semantics has an appealing intuitive interpretation and may be viewed as a set-theoretic generalization of the probabilistic interpretations of default reasoning. We show that our semantics is most general in the sense that any modal semantics that is sound for some basic axioms for default reasoning is a special case of our semantics. Such a generality result may serve to provide a semantical analysis of the relative strength of different proof systems and to show the nonexistence of semantics with certain properties.