Results 1 - 10 of 57
Toward a Logic for Qualitative Decision Theory
 In Proceedings of the KR'94
, 1994
Abstract

Cited by 196 (4 self)
We present a logic for representing and reasoning with qualitative statements of preference and normality and describe how these may interact in decision making under uncertainty. Our aim is to develop a logical calculus that employs the basic elements of classical decision theory, namely probabilities, utilities and actions, but exploits qualitative information about these elements directly for the derivation of goals. Preferences and judgements of normality are captured in a modal/conditional logic, and a simple model of action is incorporated. Without quantitative information, decision criteria other than maximum expected utility are pursued. We describe how techniques for conditional default reasoning can be used to complete information about both preferences and normality judgements, and we show how maximin and maximax strategies can be expressed in our logic.
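The maximin strategy mentioned in this abstract can be illustrated with a small sketch: with only a qualitative ordering over outcomes and no numeric utilities, one picks the action whose worst possible outcome is ranked best. The umbrella example and its preference ordering are illustrative assumptions, not drawn from the paper.

```python
# Maximin over purely qualitative preferences: actions map states to
# outcomes, and only a ranking of outcomes (worst to best) is known.
# The scenario and ordering below are made-up illustrations.

outcomes = {
    "umbrella": {"rain": "dry", "sun": "encumbered"},
    "no_umbrella": {"rain": "wet", "sun": "free"},
}
# Assumed qualitative preference order, worst to best.
rank = {"wet": 0, "encumbered": 1, "dry": 2, "free": 3}

def maximin(acts):
    """Choose the action whose worst-case outcome is ranked highest."""
    return max(acts, key=lambda a: min(rank[o] for o in acts[a].values()))

print(maximin(outcomes))  # the action with the best worst case
```

A maximax variant would simply replace the inner `min` with `max`, preferring the action with the best best case.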
System Z: a natural ordering of defaults with tractable applications to default reasoning
, 1990
Abstract

Cited by 166 (0 self)
Recent progress towards unifying the probabilistic and preferential-models semantics for nonmonotonic reasoning has led to a remarkable observation: any consistent system of default rules imposes an unambiguous and natural ordering on these rules which, to emphasize its simple and basic character, we term "Z-ordering." This ordering can be used, with various levels of refinement, to prioritize conflicting arguments, to rank the degree of abnormality of states of the world, and to define plausible consequence relationships. This paper defines the Z-ordering, briefly mentions its semantical origins, and illustrates two simple entailment relationships induced by the ordering. Two extensions are then described, maximum-entropy and conditional entailment, which trade computational simplicity for semantic refinements. 1. Description We begin with a set of rules R = {r: α_r → β_r}, where α_r and β_r are propositional formulas over a finite alphabet of literals, and → denotes a new connective to be given default interpretations later on. A truth valuation of the literals in the language will be called a model. A model M is said to verify a rule α → β if M ⊨ α ∧ β (i.e., α and β are both true in M), and to falsify α → β if M ⊨ α ∧ ¬β. Given a set R of such rules, we first define the relation of toleration.
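The verification, falsification, and toleration notions above lend themselves to a brute-force sketch. The following minimal illustration computes the Z-ordering partition by enumerating truth assignments; the penguin rule base is an assumed toy example, not taken from the paper.

```python
from itertools import product

# Rules are pairs (alpha, beta) of predicates over truth assignments.
# The atoms and the three example rules ("birds fly", "penguins are
# birds", "penguins don't fly") are illustrative assumptions.

ATOMS = ["bird", "penguin", "flies"]

def models():
    """Enumerate all truth assignments over ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def verifies(m, rule):
    alpha, beta = rule
    return alpha(m) and beta(m)

def falsifies(m, rule):
    alpha, beta = rule
    return alpha(m) and not beta(m)

def tolerated(rule, rules):
    """A rule is tolerated by a rule set if some model verifies the rule
    while falsifying no rule in the set."""
    return any(verifies(m, rule) and not any(falsifies(m, r) for r in rules)
               for m in models())

def z_order(rules):
    """Partition rules into ranks Z_0, Z_1, ... by repeatedly peeling off
    the rules tolerated by all remaining rules."""
    remaining, ranks = list(rules), []
    while remaining:
        layer = [r for r in remaining if tolerated(r, remaining)]
        if not layer:
            raise ValueError("inconsistent rule set")
        ranks.append(layer)
        remaining = [r for r in remaining if r not in layer]
    return ranks

RULES = [
    (lambda m: m["bird"], lambda m: m["flies"]),         # birds fly
    (lambda m: m["penguin"], lambda m: m["bird"]),       # penguins are birds
    (lambda m: m["penguin"], lambda m: not m["flies"]),  # penguins don't fly
]

ranks = z_order(RULES)
print([len(layer) for layer in ranks])  # [1, 2]: "birds fly" alone at rank 0
```

Here the penguin-specific rules land at a higher rank than "birds fly", so conflicts are resolved in favour of the more specific defaults, as the ordering intends.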
Another perspective on Default Reasoning
 Ann. Math. Artif. Intell
, 1992
Abstract

Cited by 80 (0 self)
The lexicographic closure of any given finite set D of normal defaults is defined. A conditional assertion a |∼ b is in this lexicographic closure if, given the defaults D and the fact a, one would conclude b. The lexicographic closure is essentially a rational extension of D, and of its rational closure, defined in a previous paper. It provides a logic of normal defaults that is different from the one proposed by R. Reiter and that is rich enough not to require the consideration of non-normal defaults. A large number of examples are provided to show that the lexicographic closure corresponds to the basic intuitions behind Reiter's logic of defaults. 1 Plan of this paper Section 2 is a general introduction, describing the goal of this paper in relation to Reiter's Default Logic and the program proposed in [12] by Lehmann and Magidor. Section 3 first discusses at length some general principles of the logic of defaults, with many examples, and then puts this paper in perspe...
Plausibility Measures and Default Reasoning
 Journal of the ACM
, 1996
Abstract

Cited by 79 (12 self)
this paper: default reasoning. In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, ε-semantics, possibilistic structures, and rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties. While this was viewed as a surprise, we show here that it is almost inevitable. In the framework of plausibility measures, we can give a necessary condition for the KLM axioms to be sound, and an additional condition necessary and sufficient to ensure that the KLM axioms are complete. This additional condition is so weak that it is almost always met whenever the axioms are sound. In particular, it is easily seen to hold for all the proposals made in the literature. Categories and Subject Descriptors: F.4.1 [Mathematical Logic and Formal Languages]:
Some syntactic approaches to the handling of inconsistent knowledge bases: A comparative study - Part 1: The flat case
Abstract

Cited by 71 (12 self)
This paper presents and discusses several methods for reasoning from inconsistent knowledge bases. A so-called argued consequence relation, taking into account the existence of consistent arguments in favour of a conclusion and the absence of consistent arguments in favour of its contrary, is particularly investigated. Flat knowledge bases, i.e., without any priority between their elements, are studied under different inconsistency-tolerant consequence relations, namely the so-called argumentative, free, universal, existential, cardinality-based, and paraconsistent consequence relations. The syntax-sensitivity of these consequence relations is studied. A companion paper is devoted to the case where priorities exist between the pieces of information in the knowledge base. Key words: inconsistency, argumentation, nonmonotonic reasoning, syntax-sensitivity. * Some of the results contained in this paper were presented at the Ninth Conference on Uncertainty in Artificial Intelligence (UAI'...
Random Worlds and Maximum Entropy
 In Proc. 7th IEEE Symp. on Logic in Computer Science
, 1994
Abstract

Cited by 49 (12 self)
Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula φ holds given KB. If we are reasoning about a world or system consisting of N individuals, then we can consider all possible worlds, or first-order models, with domain {1, …, N} that satisfy KB, and compute the fraction of them in which φ is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying φ and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum-entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics and artificial intelligence, but is far more general. Of equal interest to the result itself are...
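A brute-force version of the random-worlds computation described above can be sketched for a tiny unary vocabulary. The single predicate, the statistical-style constraint, and the query below are illustrative assumptions, and the asymptotic behaviour is only hinted at by increasing N.

```python
from itertools import product

# Random-worlds sketch: one unary predicate Fly over a domain of N
# individuals. A world is a subset of {0, ..., N-1} (the extension of
# Fly). The KB and query phi below are made-up illustrations.

def degree_of_belief(N, kb, phi):
    """Fraction of worlds satisfying kb in which phi also holds."""
    worlds = [set(i for i, b in enumerate(bits) if b)
              for bits in product([0, 1], repeat=N)]
    kb_worlds = [w for w in worlds if kb(w, N)]
    if not kb_worlds:
        return None  # KB unsatisfiable at this domain size
    return sum(phi(w, N) for w in kb_worlds) / len(kb_worlds)

# Assumed KB: "at least 80% of individuals fly".
kb = lambda w, N: len(w) >= 0.8 * N
# Assumed query: "individual 0 flies".
phi = lambda w, N: 0 in w

for N in (5, 10, 15):
    print(N, degree_of_belief(N, kb, phi))
```

As N grows, the count of worlds is dominated by those whose Fly-extension is as close as the constraint allows to the entropy-maximizing proportion, so the computed fraction drifts toward 0.8, consistent with the maximum-entropy connection the abstract describes.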
Statistical Foundations for Default Reasoning
, 1993
Abstract

Cited by 45 (8 self)
We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement φ. The degree of belief can be used to decide whether to defeasibly conclude φ. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of [Goldszmidt et al., 1990], which applies maximum entropy ideas t...
Probabilistic Default Reasoning with Conditional Constraints
 ANN. MATH. ARTIF. INTELL
, 2000
Abstract

Cited by 35 (20 self)
We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in system Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they show behavior similar to reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have properties similar to those of their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational com...
A Computational Theory of Decision Networks
 International Journal of Approximate Reasoning
, 1994
Abstract

Cited by 33 (2 self)
This paper is about how to represent and solve decision problems in Bayesian decision theory (e.g. [6]). A general representation named decision networks is proposed based on influence diagrams [10]. This new representation incorporates the idea, from Markov decision processes (e.g. [5]), that a decision may be conditionally independent of certain pieces of available information. It also allows multiple cooperative agents and facilitates the exploitation of separability in the utility function. Decision networks inherit the advantages of both influence diagrams and Markov decision processes, which makes them a better representation framework for decision analysis, planning under uncertainty, and medical diagnosis and treatment.
Modeling Belief in Dynamic Systems. Part II: Revision and Update
 Journal of A.I. Research
, 1999
Abstract

Cited by 26 (7 self)
The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper [Friedman and Halpern 1997a], we introduce a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the change of beliefs over time. In this paper, we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method, and to better understand the principles underlying them. In particular, it shows that Katsuno and Mendelzon's notion of belief update [Katsuno and Mendelzon 1991a] depends on several strong assumptions that may limit its applicability in artificial intelligence. Finally, our analysis allows us to identify a notion of minimal change that underlies a broad range of belief change operations including revi...