Results 1–10 of 11
Probabilistic Default Reasoning with Conditional Constraints
Ann. Math. Artif. Intell., 2000
Abstract

Cited by 35 (20 self)
We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in System Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they behave similarly to reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have properties similar to those of their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational com...
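For reference, the rationality postulates of Kraus, Lehmann, and Magidor (System P), together with Rational Monotonicity, which this and several of the abstracts below appeal to, can be stated for a defeasible consequence relation $\mid\sim$ as follows (standard formulations of the classical postulates, not taken from the paper itself):

```latex
% System P postulates for a defeasible consequence relation |~ ,
% plus Rational Monotonicity (schematic formulation).
\begin{align*}
\textbf{Reflexivity:}\quad
  & \alpha \mid\sim \alpha\\
\textbf{Left Logical Equivalence:}\quad
  & \models \alpha \leftrightarrow \beta,\ \alpha \mid\sim \gamma
    \ \Rightarrow\ \beta \mid\sim \gamma\\
\textbf{Right Weakening:}\quad
  & \models \alpha \rightarrow \beta,\ \gamma \mid\sim \alpha
    \ \Rightarrow\ \gamma \mid\sim \beta\\
\textbf{Cut:}\quad
  & \alpha \wedge \beta \mid\sim \gamma,\ \alpha \mid\sim \beta
    \ \Rightarrow\ \alpha \mid\sim \gamma\\
\textbf{Cautious Monotonicity:}\quad
  & \alpha \mid\sim \beta,\ \alpha \mid\sim \gamma
    \ \Rightarrow\ \alpha \wedge \beta \mid\sim \gamma\\
\textbf{Or:}\quad
  & \alpha \mid\sim \gamma,\ \beta \mid\sim \gamma
    \ \Rightarrow\ \alpha \vee \beta \mid\sim \gamma\\
\textbf{Rational Monotonicity:}\quad
  & \alpha \mid\sim \gamma,\ \alpha \not\mid\sim \neg\beta
    \ \Rightarrow\ \alpha \wedge \beta \mid\sim \gamma
\end{align*}
```

The paper proves that its probabilistic notions of entailment satisfy probabilistic analogues of these postulates.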
Probabilistic Logic under Coherence: Complexity and Algorithms
In Proceedings ISIPTA '01, 2001
Abstract

Cited by 22 (11 self)
We study probabilistic logic from the viewpoint of the coherence principle of de Finetti. In detail, we explore the relationship between coherence-based and classical model-theoretic probabilistic logic. Interestingly, we show that the notions of g-coherence and of g-coherent entailment can be expressed by combining notions in model-theoretic probabilistic logic with concepts from default reasoning. Using these results, we analyze the computational complexity of probabilistic reasoning under coherence. Moreover, we present new algorithms for deciding g-coherence and for computing tight g-coherent intervals, which reduce these tasks to standard reasoning tasks in model-theoretic probabilistic logic. Thus, efficient techniques for model-theoretic probabilistic reasoning, for example column generation techniques, can immediately be applied to probabilistic reasoning under coherence. We then describe two other interesting techniques for efficient model-theoretic probabilistic reasoning in the conjunctive case.
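The model-theoretic side of this correspondence can be sketched concretely: a probabilistic interpretation is a distribution over possible worlds, and it satisfies a conditional constraint (ψ|φ)[l,u] iff l·Pr(φ) ≤ Pr(ψ∧φ) ≤ u·Pr(φ). A minimal Python illustration, where the atoms, the distribution, and the bounds are invented for the example:

```python
from itertools import product

def worlds(atoms):
    """All truth assignments over the given atoms."""
    return [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]

def prob(dist, event):
    """Probability of an event (a predicate on worlds) under a distribution
    given as a list of (world, probability) pairs."""
    return sum(p for w, p in dist if event(w))

def satisfies(dist, constraints, eps=1e-9):
    """Check whether dist models every conditional constraint (psi|phi)[l, u]:
    l * Pr(phi) <= Pr(psi and phi) <= u * Pr(phi)."""
    for psi, phi, l, u in constraints:
        p_phi = prob(dist, phi)
        p_both = prob(dist, lambda w: psi(w) and phi(w))
        if not (l * p_phi - eps <= p_both <= u * p_phi + eps):
            return False
    return True

# Hypothetical toy vocabulary: atoms "bird" and "flies".
ws = worlds(["bird", "flies"])
dist = [
    (ws[0], 0.72),  # bird, flies
    (ws[1], 0.08),  # bird, not flies
    (ws[2], 0.10),  # not bird, flies
    (ws[3], 0.10),  # neither
]
# Constraint (flies | bird)[0.8, 1]: among birds, at least 80% fly.
constraints = [(lambda w: w["flies"], lambda w: w["bird"], 0.8, 1.0)]
print(satisfies(dist, constraints))  # True: Pr(bird) = 0.8, Pr(flies & bird) = 0.72
```

The algorithms in the paper reduce g-coherence checking to satisfiability questions of exactly this kind, which in practice are solved by linear programming over the world probabilities rather than by enumeration.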
Weak nonmonotonic probabilistic logics
Abstract

Cited by 21 (6 self)
Towards probabilistic formalisms for resolving local inconsistencies under model-theoretic probabilistic entailment, we present probabilistic generalizations of Pearl’s entailment in System Z and Lehmann’s lexicographic entailment. We then analyze the nonmonotonic and semantic properties of the new notions of entailment. In particular, we show that they satisfy the rationality postulates of System P and the property of Rational Monotonicity. Moreover, we show that model-theoretic probabilistic entailment is stronger than the new notion of lexicographic entailment, which in turn is stronger than the new notion of entailment in System Z. As an important feature of the new notions of entailment in System Z and lexicographic entailment, we show that they coincide with model-theoretic probabilistic entailment whenever there are no local inconsistencies. We also show that the new notions of entailment in System Z and lexicographic entailment are proper generalizations of their classical counterparts. Finally, we present algorithms for reasoning under the new formalisms, and we give a precise picture of their computational complexity.
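As background for the System Z generalizations discussed here, the classical z-partition is computed by iterated tolerance testing: a default (β|α) is tolerated by a set of defaults if some world verifies α∧β while satisfying every default's material counterpart. A brute-force sketch, with the familiar penguin example as an illustrative input (not taken from the paper):

```python
from itertools import product

def z_partition(defaults, atoms):
    """Compute Pearl's z-partition of a set of defaults (alpha, beta),
    where alpha and beta are predicates on truth assignments."""
    worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]

    def tolerated(d, ds):
        # d is tolerated by ds if some world verifies d (alpha and beta both
        # true) while falsifying no default in ds (each material alpha -> beta holds).
        a, b = d
        return any(a(w) and b(w) and all((not ai(w)) or bi(w) for ai, bi in ds)
                   for w in worlds)

    remaining, partition = list(defaults), []
    while remaining:
        layer = [d for d in remaining if tolerated(d, remaining)]
        if not layer:
            raise ValueError("default base is inconsistent")
        partition.append(layer)
        remaining = [d for d in remaining if d not in layer]
    return partition

# Illustrative defaults over atoms p (penguin), b (bird), f (flies):
# birds fly, penguins are birds, penguins do not fly.
bird_fly = (lambda w: w["b"], lambda w: w["f"])
peng_bird = (lambda w: w["p"], lambda w: w["b"])
peng_nofly = (lambda w: w["p"], lambda w: not w["f"])
parts = z_partition([bird_fly, peng_bird, peng_nofly], ["p", "b", "f"])
print([len(layer) for layer in parts])  # [1, 2]: bird_fly gets the lowest rank
```

The more specific penguin defaults end up in the higher layer, which is how System Z makes specificity override more general defaults.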
Updating action domain descriptions
In Proc. IJCAI, 2005
Abstract

Cited by 13 (4 self)
How can an intelligent agent update her knowledge base about an action domain, relative to some conditions (possibly obtained from earlier observations)? We study this question in a formal framework for reasoning about actions and change, in which the meaning of an action domain description can be represented by a directed graph whose nodes correspond to states and whose edges correspond to action occurrences. We define the update of an action domain description in this framework, and show among other results that a solution to this problem can be obtained by a divide-and-conquer approach in some cases. We also introduce methods to compute a solution and an approximate solution to this problem, and analyze the computational complexity of these problems. Finally, we discuss techniques to improve the quality of solutions.
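The graph view of an action domain sketched in this abstract can be made concrete in a few lines. The class name, state labels, and edge-set form of update below are illustrative assumptions only; the paper updates the action domain *description*, from which such a transition graph is induced:

```python
# A minimal sketch of an action domain as a directed labeled graph:
# nodes are states, edges are action occurrences.
class ActionDomain:
    def __init__(self):
        self.edges = set()  # (state, action, state) triples

    def add_transition(self, s, action, t):
        self.edges.add((s, action, t))

    def successors(self, s, action):
        """States reachable from s by one occurrence of the action."""
        return {t for (u, a, t) in self.edges if u == s and a == action}

    def update(self, added, removed):
        """Return a new domain whose transition graph adds the edges in
        `added` and drops those in `removed` (the old domain is unchanged)."""
        new = ActionDomain()
        new.edges = (self.edges | set(added)) - set(removed)
        return new

dom = ActionDomain()
dom.add_transition("off", "toggle", "on")
dom.add_transition("on", "toggle", "off")
# An observation forces a new condition: toggling a broken switch does nothing.
dom2 = dom.update(added={("broken", "toggle", "broken")}, removed=set())
print(dom2.successors("broken", "toggle"))  # {'broken'}
```

Defining update at the level of descriptions rather than graphs is precisely what makes the problem hard, since many descriptions can induce the same graph.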
Combining probabilistic logic programming with the power of maximum entropy
Artif. Intell., 2004
Abstract

Cited by 11 (3 self)
This paper is about the combination of two powerful approaches to uncertain reasoning: logic programming in a probabilistic setting, on the one hand, and the information-theoretic principle of maximum entropy, on the other hand. More precisely, we present two approaches to probabilistic logic programming under maximum entropy. The first one is based on the usual notion of entailment under maximum entropy, and is defined for the very general case of probabilistic logic programs over Boolean events. The second one is based on a new notion of entailment under maximum entropy, where the principle of maximum entropy is coupled with the closed world assumption (CWA) from classical logic programming. It is only defined for the more restricted case of probabilistic logic programs over conjunctive events. We then analyze the nonmonotonic behavior of both approaches along benchmark examples and along general properties for default reasoning from conditional knowledge bases. It turns out that both approaches have very nice nonmonotonic features. In particular, they realize some inheritance of probabilistic knowledge along subclass relationships, without suffering from the problem of inheritance blocking and from the drowning problem. They both also satisfy the property of rational monotonicity and several irrelevance properties. We finally present algorithms for both approaches, which are based on generalizations of techniques from probabilistic
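The principle of maximum entropy that both approaches build on can be illustrated independently of logic programming: among all distributions satisfying the given constraints, select the one with maximal entropy. A coarse brute-force sketch over four possible worlds (the grid search is purely for illustration; real systems solve a convex optimization problem, and the constraint below is an invented example):

```python
from itertools import product
from math import log

def entropy(p):
    """Shannon entropy of a distribution (0 * log 0 treated as 0)."""
    return -sum(x * log(x) for x in p if x > 0)

def max_entropy(constraint, step=0.05):
    """Coarse grid search for the maximum-entropy distribution over four
    worlds (A&B, A&~B, ~A&B, ~A&~B) subject to `constraint`."""
    best, best_h = None, -1.0
    n = round(1 / step)
    for i, j, k in product(range(n + 1), repeat=3):
        p = [i * step, j * step, k * step]
        r = 1.0 - sum(p)
        if r < -1e-9:      # mass exceeds 1: not a distribution
            continue
        p.append(max(r, 0.0))
        if constraint(p):
            h = entropy(p)
            if h > best_h:
                best, best_h = p, h
    return best

# Constraint Pr(A) = 0.7: maximum entropy spreads the remaining freedom
# uniformly inside A and inside ~A, giving (0.35, 0.35, 0.15, 0.15).
me = max_entropy(lambda p: abs(p[0] + p[1] - 0.7) < 1e-9)
print([round(x, 2) for x in me])  # [0.35, 0.35, 0.15, 0.15]
```

This "spread mass as evenly as the constraints allow" behavior is what yields the inheritance properties the abstract mentions: knowledge about a class transfers to subclasses unless some constraint says otherwise.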
Nonmonotonic Probabilistic Logics between Model-Theoretic Probabilistic Logic and Probabilistic Logic under Coherence
2002
Abstract

Cited by 7 (6 self)
Recently, it has been shown that probabilistic entailment under coherence is weaker than model-theoretic probabilistic entailment. Moreover, probabilistic entailment under coherence is a generalization of default entailment in System P. In this paper, we continue this line of research by presenting probabilistic generalizations of more sophisticated notions of classical default entailment that lie between model-theoretic probabilistic entailment and probabilistic entailment under coherence. That is, the new formalisms properly generalize their counterparts in classical default reasoning, they are weaker than model-theoretic probabilistic entailment, and they are stronger than probabilistic entailment under coherence. The new formalisms are useful especially for handling probabilistic inconsistencies related to conditioning on zero events. They can also be applied for probabilistic belief revision. More generally, in the same spirit as a similar previous paper, this paper sheds light on exciting new formalisms for probabilistic reasoning beyond the well-known standard ones.
On Stratified Belief Base Compilation
2004
Abstract

Cited by 4 (2 self)
In this paper, we investigate the extent to which knowledge compilation can be used to circumvent the complexity of skeptical inference from a stratified belief base (SBB). We first analyze the compilability of skeptical inference from an SBB, under various requirements concerning the selection policy under consideration, the possibility of varying the stratification at the online query-answering stage, and the expected complexity of inference from the compiled form. Not surprisingly, the results are mainly negative. However, since they concern the worst-case situation only, they do not prevent a compilation-based approach from being practically useful for some families of instances. While many approaches to compiling an SBB can be designed, we are primarily interested in those which take advantage of existing knowledge compilation techniques for classical inference. Specifically, we present a general framework for compiling SBBs into so-called C-normal SBBs, where C is any tractable class for clausal entailment that is the target class of a compilation function. Another major advantage of the proposed approach lies in the flexibility of the resulting C-normal belief bases: changing the stratification does not require recompiling the SBB. For several families of compiled SBBs and several selection policies, the complexity of skeptical inference is identified. Some tractable restrictions are exhibited for each policy. Finally, some empirical results are presented.
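To give an intuition for selection policies on stratified belief bases, here is a sketch of one simple policy: walk the strata from most to least certain and keep each stratum only if it is jointly consistent with what has been kept so far. The paper studies several richer policies; the toy propositional encoding (formulas as predicates on truth assignments) and the example base are assumptions of this sketch:

```python
from itertools import product

def consistent(formulas, atoms):
    """Brute-force satisfiability of a set of predicates on truth assignments."""
    return any(all(f(dict(zip(atoms, vs))) for f in formulas)
               for vs in product([True, False], repeat=len(atoms)))

def select_subbase(strata, atoms):
    """Greedy 'linear' selection policy: keep a stratum iff it is
    consistent with the more certain strata already kept."""
    kept = []
    for stratum in strata:
        if consistent(kept + stratum, atoms):
            kept += stratum
    return kept

def skeptical(strata, query, atoms):
    """Does the query hold in every model of the selected subbase?"""
    base = select_subbase(strata, atoms)
    return all(query(dict(zip(atoms, vs)))
               for vs in product([True, False], repeat=len(atoms))
               if all(f(dict(zip(atoms, vs))) for f in base))

# Toy SBB over atoms a, b. Stratum 2 (not a) conflicts with the more
# certain stratum 1 (a) and is dropped; stratum 3 (a -> b) is kept.
strata = [[lambda w: w["a"]],
          [lambda w: not w["a"]],
          [lambda w: (not w["a"]) or w["b"]]]
print(skeptical(strata, lambda w: w["b"], ["a", "b"]))  # True
```

Compilation into a C-normal form, as the abstract describes, replaces the brute-force consistency and entailment tests above with tractable clausal-entailment queries on the compiled base.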
Nonmonotonic probabilistic reasoning under variable-strength inheritance with overriding
Synthese, 2005
Abstract

Cited by 3 (2 self)
We present new probabilistic generalizations of Pearl’s entailment in System
Extending the Maximum Entropy Approach to Variable Strength Defaults
Annals of Mathematics and Artificial Intelligence, 2001
Abstract

Cited by 2 (1 self)
Introduction: The general requirements of default reasoning have mainly been laid down with the help of illustrative examples that demonstrate behaviours such as respect for specificity, inheritance to exceptional subclasses, and maintenance of ambiguity. While there is consensus regarding the most basic requirements (preferential reasoning [10] is accepted as core behaviour for nonmonotonic reasoning systems [6,17]), there is no general theory that provides a satisfactory formalisation of what underlies the default intuitions themselves. Although some default systems have captured the required behaviours, e.g., lexicographic entailment [11], there has been little objective justification of the reasons behind them. This paper aims to take a step towards the development of such a general theory by extending an existing approach [8] that does have a well-established ...
Comparing action descriptions based on semantic preferences
 Annals of Mathematics and Artificial Intelligence
Abstract

Cited by 2 (1 self)
Incorporating new information into a knowledge base is an important problem which has been widely considered. In this paper, we study the problem in a formal framework for reasoning about action and change, in which action domains are described in an action language that has a transition-based semantics. Going beyond previous work, we consider (i) a richer action language that allows for nondeterministic and concurrent actions, as well as the representation of indirect effects and dependencies between fluents, (ii) more general updates than elementary statements, and, most importantly, (iii) meta-level knowledge, such as observations, assertions, or general domain properties that remain invariant under change, expressed in an action query language. For this setting, we formalize a notion of update of an action domain description, relative to a generic preference relation on action domain descriptions that selects most preferred solutions. We study semantic and computational aspects of this notion, where we establish basic properties of updates and a decomposition result that gives rise to a divide-and-conquer approach to computing solutions under certain conditions. Furthermore, we study the computational complexity of decision problems around computing solutions, both for the generic setting and for two particular preference relations, viz. set-inclusion and weight-based preference. While deciding the existence of solutions
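The two particular preference relations named at the end, set-inclusion and weight-based preference, can be sketched abstractly as comparisons of which desired conditions (queries) each candidate description satisfies. The condition names and weights below are invented for the example:

```python
# A minimal sketch of the two preference relations, comparing candidate
# action domain descriptions by the set of conditions each satisfies.
def preferred_by_inclusion(sat_a, sat_b):
    """a is strictly preferred to b iff a satisfies a strict superset
    of the conditions that b satisfies."""
    return sat_b < sat_a  # proper-subset test on Python sets

def preferred_by_weight(sat_a, sat_b, weights):
    """a is strictly preferred to b iff the total weight of the
    conditions a satisfies is larger."""
    return sum(weights[q] for q in sat_a) > sum(weights[q] for q in sat_b)

weights = {"obs1": 3, "obs2": 1, "invariant": 5}
a, b = {"obs1", "invariant"}, {"obs1", "obs2"}
print(preferred_by_inclusion(a, b))        # False: neither set contains the other
print(preferred_by_weight(a, b, weights))  # True: 8 > 4
```

As the example shows, set-inclusion preference is a partial order that leaves many candidates incomparable, while weight-based preference totally orders them; this difference drives the distinct complexity results in the paper.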