Results 1–10 of 59
On the Hardness of Approximate Reasoning
1996
Cited by 219 (13 self)
Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider various methods used in approximate reasoning such as computing degree of belief and Bayesian belief networks, as well as reasoning techniques such as constraint satisfaction and knowledge compilation, that use approximation to avoid computational difficulties, and reduce them to model counting problems over a propositional domain. We prove that counting satisfying assignments of propositional languages is intractable even for Horn and monotone formulae, and even when the size of clauses and number of occurrences of the variables are extremely limited. This should be contrasted with the case of deductive reasoning, where Horn theories and theories with binary clauses are distinguished by the e...
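For concreteness, the core problem here is model counting (#SAT): counting the satisfying assignments of a propositional formula. Below is a minimal brute-force sketch, with clauses as DIMACS-style signed integers; it is an illustration only, and its exponential running time is exactly the point, since the paper argues that nothing substantially better, even approximately, is likely to exist for Horn or monotone clauses of bounded size.

```python
# Brute-force model counting (#SAT) for a CNF formula.
# Clauses use DIMACS-style signed integers: 3 means x3, -3 means NOT x3.
from itertools import product

def count_models(clauses, n):
    """Count assignments over variables 1..n satisfying every clause."""
    count = 0
    for bits in product([False, True], repeat=n):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            count += 1
    return count

# Example: the monotone 2-CNF (x1 or x2) and (x2 or x3) over 3 variables.
print(count_models([[1, 2], [2, 3]], 3))  # -> 5
```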
A Survey on Knowledge Compilation
1998
Cited by 96 (3 self)
In this paper we survey recent results in knowledge compilation of propositional knowledge bases. We first define and limit the scope of such a technique, then we survey exact and approximate knowledge compilation methods. We include a discussion of compilation for nonmonotonic knowledge bases. Keywords: Knowledge Representation, Efficiency of Reasoning
Learning to Reason
Journal of the ACM, 1994
Cited by 56 (24 self)
We introduce a new framework for the study of reasoning. The Learning (in order) to Reason approach developed here views learning as an integral part of the inference process, and suggests that learning and reasoning should be studied together. The Learning to Reason framework combines the interfaces to the world used by known learning models with the reasoning task and a performance criterion suitable for it. In this framework, the intelligent agent is given access to its favorite learning interface, and is also given a grace period in which it can interact with this interface and construct a representation KB of the world W. The reasoning performance is measured only after this period, when the agent is presented with queries α from some query language, relevant to the world, and has to answer whether W implies α. The approach is meant to overcome the main computational difficulties in the traditional treatment of reasoning which stem from its separation from the “world”. Since the agent interacts with the world when constructing its knowledge representation it can choose a representation that is useful for the task at hand. Moreover, we can now make explicit the dependence of the reasoning performance on the environment the agent interacts with. We show how previous results from learning theory and reasoning fit into this framework and ...
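As a toy rendering of this setup (not the paper's algorithms; the class and method names below are purely illustrative): suppose the grace-period interface simply shows the agent example models of W, which it stores as its representation KB; a query α is then answered by evaluating it on every stored model. The sketch is sound only with respect to the sampled models; the paper's concern is when such model-based knowledge bases provably suffice for correct reasoning.

```python
# Hypothetical grace-period interface: observe() stores example models
# of the world W; entails() answers "does W imply alpha?" by checking
# alpha (any Boolean function over the variables) on each stored model.
class ModelBasedKB:
    def __init__(self):
        self.models = []           # each model: dict var -> bool

    def observe(self, model):      # grace-period interaction
        self.models.append(model)

    def entails(self, alpha):      # reasoning phase
        return all(alpha(m) for m in self.models)

kb = ModelBasedKB()
kb.observe({"a": True, "b": True})
kb.observe({"a": True, "b": False})
print(kb.entails(lambda m: m["a"]))   # True: 'a' holds in every stored model
print(kb.entails(lambda m: m["b"]))   # False
```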
The comparative linguistics of knowledge representation
In Proc. of IJCAI’95, 1995
Cited by 56 (2 self)
We develop a methodology for comparing knowledge representation formalisms in terms of their "representational succinctness," that is, their ability to express knowledge situations relatively efficiently. We use this framework for comparing many important formalisms for knowledge base representation: propositional logic, default logic, circumscription, and model preference defaults; and, at a lower level, Horn formulas, characteristic models, decision trees, disjunctive normal form, and conjunctive normal form. We also show that adding new variables improves the effective expressibility of certain knowledge representation formalisms.
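The effect of adding new variables can be seen in a standard example (a Tseitin-style definitional translation, used here as an illustration rather than the paper's own construction): the formula (x_1 and y_1) or ... or (x_n and y_n) needs 2^n clauses as a CNF over the original variables, but only 3n + 1 once a definitional variable z_i is introduced for each conjunct.

```python
# Clause-count comparison: plain CNF by distribution vs. a CNF with
# auxiliary variables z_i <-> (x_i AND y_i).  Literals are strings,
# with a "-" prefix for negation; only the sizes matter here.
from itertools import product

def naive_cnf(n):
    # distribute the disjunction: pick x_i or y_i from each disjunct
    return [[f"{v}{i}" for i, v in zip(range(n), pick)]
            for pick in product("xy", repeat=n)]

def tseitin_cnf(n):
    clauses = [[f"z{i}" for i in range(n)]]            # z1 or ... or zn
    for i in range(n):
        clauses += [[f"-z{i}", f"x{i}"],               # z_i -> x_i
                    [f"-z{i}", f"y{i}"],               # z_i -> y_i
                    [f"-x{i}", f"-y{i}", f"z{i}"]]     # x_i & y_i -> z_i
    return clauses

print(len(naive_cnf(10)), len(tseitin_cnf(10)))        # 1024 vs 31
```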
Propositional Independence: Formula-Variable Independence and Forgetting
Journal of Artificial Intelligence Research, 2003
Cited by 54 (9 self)
Independence, the study of what is relevant to a given problem of reasoning, has received increasing attention from the AI community. In this paper, we consider two basic forms of independence, namely, a syntactic one and a semantic one. We show the features and drawbacks of each. In particular, while the syntactic form of independence is computationally easy to check, there are cases in which things that intuitively are not relevant are not recognized as such. We also consider the problem of forgetting, i.e., distilling from a knowledge base only the part that is relevant to the set of queries constructed from a subset of the alphabet. While such a process is computationally hard, it allows for a simplification of subsequent reasoning, and can thus be viewed as a form of compilation: once the relevant part of a knowledge base has been extracted, all reasoning tasks to be performed can be simplified.
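Semantic forgetting has a standard two-substitution characterization, forget(φ, x) ≡ φ[x/⊤] ∨ φ[x/⊥]: the result mentions x only vacuously and agrees with φ on every query over the remaining variables. The sketch below implements this identity for formulas given as Python predicates, an assumption of the illustration; the paper itself is concerned with syntactic representations and their sizes.

```python
# Semantic forgetting via the two-substitution identity:
# forget(phi, x)(m) = phi(m with x:=True) OR phi(m with x:=False).
def forget(phi, x):
    """phi: dict -> bool.  Returns phi with variable x forgotten."""
    def psi(m):
        return phi({**m, x: True}) or phi({**m, x: False})
    return psi

# Example: phi = (a <-> x) AND (b <-> x).  Forgetting x leaves a <-> b.
phi = lambda m: (m["a"] == m["x"]) and (m["b"] == m["x"])
psi = forget(phi, "x")
print(psi({"a": True, "b": True,  "x": False}))   # True (x now irrelevant)
print(psi({"a": True, "b": False, "x": False}))   # False
```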
Is Intractability of Non-Monotonic Reasoning a Real Drawback?
Artificial Intelligence, 1996
Cited by 43 (8 self)
Several studies about computational complexity of nonmonotonic reasoning (NMR) showed that nonmonotonic inference is significantly harder than classical, monotonic inference. This contrasts with the general idea that NMR can be used to make knowledge representation and reasoning simpler, not harder. In this paper we show that, to some extent, NMR fulfills the representation goal. In particular, we prove that nonmonotonic formalisms such as circumscription and default logic allow for a much more compact and natural representation of propositional knowledge than propositional calculus. Proofs are based on a suitable definition of compilable inference problem, and on non-uniform complexity classes. Some results about intractability of circumscription and default logic can therefore be interpreted as the price one has to pay for having such an extra-compact representation. On the other hand, intractability of inference and compactness of representation are not equivalent notions: we ex...
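For concreteness, propositional circumscription in its standard model-theoretic reading (minimizing all atoms) sanctions a query exactly when it holds in every subset-minimal model of the theory. The brute-force sketch below is illustrative only; the paper's concern is the size of equivalent classical representations, not the cost of this check.

```python
# Minimal-model (circumscriptive) inference by exhaustive enumeration.
# A model is the frozenset of atoms it makes true; minimal models are
# those with no strictly smaller model of the same theory.
from itertools import product

def minimal_models(phi, variables):
    models = [frozenset(v for v, b in zip(variables, bits) if b)
              for bits in product([False, True], repeat=len(variables))
              if phi(dict(zip(variables, bits)))]
    return [m for m in models if not any(m2 < m for m2 in models)]

# Theory: a OR b.  Its minimal models are {a} and {b}, so NOT(a AND b)
# follows under circumscription although it does not follow classically.
phi = lambda m: m["a"] or m["b"]
mins = minimal_models(phi, ["a", "b"])
print(sorted(sorted(m) for m in mins))                  # [['a'], ['b']]
print(all(not ("a" in m and "b" in m) for m in mins))   # True
```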
Horn Approximations of Empirical Data
Artificial Intelligence, 1995
Cited by 33 (2 self)
Formal AI systems traditionally represent knowledge using logical formulas. Sometimes, however, a model-based representation is more compact and enables faster reasoning than the corresponding formula-based representation. The central idea behind our work is to represent a large set of models by a subset of characteristic models. More specifically, we examine model-based representations of Horn theories, and show that there are large Horn theories that can be exactly represented by an exponentially smaller set of characteristic models. We show that deduction based on a set of characteristic models requires only polynomial time, as it does using Horn theories. More surprisingly, abduction can be performed in polynomial time using a set of characteristic models, whereas abduction using Horn theories is NP-complete. Finally, we discuss algorithms for generating efficient representations of the Horn theory that best approximates a general set of models.
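A small sketch of the characteristic-model idea, in one standard formalization and assuming models are given as sets of true atoms: the model set of a Horn theory is closed under intersection, so it is generated by the models that are not intersections of other models, and a Horn query is entailed iff it holds in all of these, since Horn formulas are preserved under intersecting models.

```python
# Characteristic models of an intersection-closed model set.  Because
# the set is closed under intersection, a model is redundant iff it is
# the intersection of two *other* models, so a pairwise check suffices.
from itertools import combinations

def characteristic_models(models):
    """models: intersection-closed set of frozensets of true atoms."""
    return {m for m in models
            if not any(a & b == m for a, b in combinations(models - {m}, 2))}

def entails(char_models, horn_query):
    """horn_query: frozenset -> bool.  Sound and complete for Horn queries."""
    return all(horn_query(m) for m in char_models)

# Model set of the Horn theory (a -> b), (c -> b), (a & c -> false):
models = {frozenset(), frozenset({"b"}),
          frozenset({"a", "b"}), frozenset({"b", "c"})}
chars = characteristic_models(models)
print(sorted(sorted(m) for m in chars))    # [[], ['a', 'b'], ['b', 'c']]
print(entails(chars, lambda m: "a" not in m or "b" in m))   # True: a -> b
```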
On Computing All Abductive Explanations
Institut für Informationssysteme, 2002
Cited by 29 (5 self)
We consider the computation of all, respectively a polynomial subset of, the explanations of an abductive query from a Horn theory, and pay particular attention to whether the query is a positive or negative letter, whether the explanation is based on literals from an assumption set, and whether the Horn theory is represented in terms of formulas or characteristic models. We derive ...
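As a baseline for the problem being studied (and nothing more: the paper is about doing far better than exhaustive search), one common definition takes an explanation to be a minimal, consistent set of assumption letters that together with the theory entails the query. These can be enumerated by brute force over a formula-based Horn theory using forward chaining:

```python
# Enumerate all abductive explanations of an atomic query from a Horn
# theory.  Rules are (body, head) pairs over atoms; head None encodes a
# negative clause (body -> false).
from itertools import combinations

def closure(rules, facts):
    """Forward-chain from facts; returns (derived atoms, consistent?)."""
    derived, changed = set(facts), True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived:
                if head is None:
                    return derived, False       # a body -> false rule fired
                if head not in derived:
                    derived.add(head)
                    changed = True
    return derived, True

def explanations(rules, assumables, query):
    expls = []
    for k in range(len(assumables) + 1):
        for s in combinations(sorted(assumables), k):
            derived, consistent = closure(rules, s)
            if consistent and query in derived:
                expls.append(set(s))
    # keep only the subset-minimal explanations
    return [e for e in expls if not any(f < e for f in expls)]

rules = [({"rain"}, "wet"), ({"sprinkler"}, "wet"),
         ({"rain", "sunny"}, None)]             # rain & sunny -> false
print(explanations(rules, {"rain", "sprinkler", "sunny"}, "wet"))
# -> [{'rain'}, {'sprinkler'}]: minimal, consistent, and entail 'wet'
```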
Learning to Reason with a Restricted View
1998
Cited by 29 (15 self)
The Learning to Reason framework combines the study of Learning and Reasoning into a single task. Within it, learning is done specifically for the purpose of reasoning with the learned knowledge. Computational considerations show that this is a useful paradigm; in some cases, learning and reasoning problems that are intractable when studied separately become tractable when performed as a task of Learning to Reason. In this paper we study Learning to Reason problems where the interaction with the world supplies the learner with only partial information, in the form of partial assignments. Several natural interpretations of partial assignments are considered, and learning and reasoning algorithms using them are developed. The results presented exhibit a tradeoff between learnability, the strength of the oracles used in the interface, and the range of reasoning queries the learner is guaranteed to answer correctly.
Space Efficiency of Propositional Knowledge Representation Formalisms
In Proceedings of the Fifth International Conference on the Principles of Knowledge Representation and Reasoning (KR'96), 2000
Cited by 26 (3 self)
We investigate the space efficiency of a Propositional Knowledge Representation (PKR) formalism. Intuitively, the space efficiency of a formalism F in representing a certain piece of knowledge α is the size of the shortest formula of F that represents α. In this paper we assume that knowledge is either a set of propositional interpretations (models) or a set of propositional formulae (theorems). We provide a formal way of talking about the relative ability of PKR formalisms to compactly represent a set of models or a set of theorems. We introduce two new compactness measures and the corresponding classes, and show that the relative space efficiency of a PKR formalism in representing models/theorems is directly related to such classes. In particular, we consider formalisms for nonmonotonic reasoning, such as circumscription and default logic, as well as belief revision operators and the stable model semantics for logic programs with negation. One interesting result is that formalisms ...