Results 1–10 of 91
On the Hardness of Approximate Reasoning
, 1996
Abstract

Cited by 216 (13 self)
Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider various methods used in approximate reasoning such as computing degree of belief and Bayesian belief networks, as well as reasoning techniques such as constraint satisfaction and knowledge compilation, that use approximation to avoid computational difficulties, and reduce them to model-counting problems over a propositional domain. We prove that counting satisfying assignments of propositional languages is intractable even for Horn and monotone formulae, and even when the size of clauses and number of occurrences of the variables are extremely limited. This should be contrasted with the case of deductive reasoning, where Horn theories and theories with binary clauses are distinguished by the e...
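The model-counting problem at the heart of these reductions is easy to state even though it is intractable in general. A brute-force sketch (illustrative only, not from the paper; the signed-integer clause encoding is my own convention):

```python
from itertools import product

def count_models(clauses, n):
    """Count satisfying assignments of a CNF over variables 1..n.
    Clauses are lists of nonzero ints; positive = variable, negative = negation."""
    count = 0
    for bits in product([False, True], repeat=n):
        # A clause is satisfied if some literal evaluates to true.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# Monotone 2-CNF (no negations, at most two literals per clause):
# (x1 or x2) and (x2 or x3)
print(count_models([[1, 2], [2, 3]], 3))  # 5
```

Exact enumeration like this takes 2^n steps; the paper's point is that even for such restricted monotone or Horn inputs, no substantially better exact or approximate counter exists unless standard complexity assumptions fail.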
Knowledge compilation and theory approximation
 Journal of the ACM
, 1996
Abstract

Cited by 157 (5 self)
Computational efficiency is a central concern in the design of knowledge representation systems. In order to obtain efficient systems, it has been suggested that one should limit the form of the statements in the knowledge base or use an incomplete inference mechanism. The former approach is often too restrictive for practical applications, whereas the latter leads to uncertainty about exactly what can and cannot be inferred from the knowledge base. We present a third alternative, in which knowledge given in a general representation language is translated (compiled) into a tractable form — allowing for efficient subsequent query answering. We show how propositional logical theories can be compiled into Horn theories that approximate the original information. The approximations bound the original theory from below and above in terms of logical strength. The procedures are extended to other tractable languages (for example, binary clauses) and to the first-order case. Finally, we demonstrate the generality of our approach by compiling concept descriptions in a general frame-based language into a tractable form.
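The bounding relationship can be checked directly by model enumeration: a greatest-lower-bound approximation is logically stronger than the theory (fewer models), a least-upper-bound is weaker (more models). A minimal sketch — the theory and the two Horn bounds below are hypothetical toy choices, not the paper's compilation algorithm:

```python
from itertools import product

def models(clauses, n):
    """Set of satisfying assignments of a CNF (clauses as signed ints)."""
    return {bits for bits in product([False, True], repeat=n)
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)}

# Original theory: (a or b) -- not Horn (two positive literals).
sigma = [[1, 2]]
# A Horn lower-bound candidate: the unit clause "a" (stronger than sigma).
glb = [[1]]
# A Horn upper-bound candidate: the empty theory (weaker than sigma).
lub = []

m_glb, m_sigma, m_lub = models(glb, 2), models(sigma, 2), models(lub, 2)
# Bounds in logical strength = containment of model sets.
print(m_glb <= m_sigma <= m_lub)  # True
```

Queries can then be answered quickly against the Horn bounds: if the upper bound entails a query the theory does too, and if the lower bound fails to entail it the theory fails as well; only the remaining gap requires the original theory.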
A Survey on Knowledge Compilation
, 1998
Abstract

Cited by 95 (3 self)
In this paper we survey recent results in knowledge compilation of propositional knowledge bases. We first define and limit the scope of such a technique, then we survey exact and approximate knowledge compilation methods. We include a discussion of compilation for nonmonotonic knowledge bases. Keywords: Knowledge Representation, Efficiency of Reasoning
Tractable Reasoning via Approximation
 Artificial Intelligence
, 1995
Abstract

Cited by 92 (0 self)
Problems in logic are well-known to be hard to solve in the worst case. Two different strategies for dealing with this aspect are known from the literature: language restriction and theory approximation. In this paper we are concerned with the second strategy. Our main goal is to define a semantically well-founded logic for approximate reasoning, which is justifiable from the intuitive point of view, and to provide fast algorithms for dealing with it even when using expressive languages. We also want our logic to be useful to perform approximate reasoning in different contexts. We define a method for the approximation of decision reasoning problems based on multivalued logics. Our work expands and generalizes in several directions ideas presented by other researchers. The major features of our technique are: 1) approximate answers give semantically clear information about the problem at hand; 2) approximate answers are easier to compute than answers to the original problem; 3) approxim...
Default-reasoning with models
Abstract

Cited by 78 (18 self)
Reasoning with model-based representations is an intuitive paradigm, which has been shown to be theoretically sound and to possess some computational advantages over reasoning with formula-based representations of knowledge. In this paper we present more evidence for the utility of such representations. In real-life situations, one normally completes a lot of missing "context" information when answering queries. We model this situation by augmenting the available knowledge about the world with context-specific information; we show that reasoning with model-based representations can be done efficiently in the presence of varying context information. We then consider the task of default reasoning. We show that default reasoning is a generalization of reasoning within context, in which the reasoner has many "context" rules, which may be conflicting. We characterize the cases in which model-based reasoning supports efficient default reasoning and develop algorithms that efficiently handle fragments of Reiter's default logic. In particular, this includes cases in which performing the default reasoning task with the traditional, formula-based representation is intractable. Further, we argue that these results support an incremental view of reasoning in a natural way.
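The core operation of model-based deduction is simple to sketch: a knowledge base held as a set of models entails a query exactly when the query holds in every stored model. A toy illustration (the vocabulary and models below are hypothetical, not from the paper):

```python
def entails(model_set, query):
    """Model-based deduction: KB |= query iff query holds in every
    stored model. Models are dicts var -> bool; query is a predicate."""
    return all(query(m) for m in model_set)

# KB represented directly by (some of) its models:
kb_models = [
    {"bird": True, "flies": True},
    {"bird": False, "flies": False},
]

# Query: bird -> flies. Holds in both stored models.
print(entails(kb_models, lambda m: (not m["bird"]) or m["flies"]))  # True
# Query: flies. Fails in the second model.
print(entails(kb_models, lambda m: m["flies"]))                     # False
```

The cost is one evaluation per stored model rather than a satisfiability test, which is what makes this attractive when a small, well-chosen set of models faithfully represents the theory.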
Structure Identification in Relational Data
, 1997
Abstract

Cited by 74 (2 self)
This paper presents several investigations into the prospects for identifying meaningful structures in empirical data, namely, structures permitting effective organization of the data to meet the requirements of future queries. We propose a general framework whereby the notion of identifiability is given a precise formal definition similar to that of learnability. Using this framework, we then explore whether a tractable procedure exists for deciding whether a given relation is decomposable into a constraint network or a CNF theory with desirable topology and, if the answer is positive, for identifying the desired decomposition. Finally, we ...
Learning to reason
 Journal of the ACM
, 1994
Abstract

Cited by 56 (24 self)
We introduce a new framework for the study of reasoning. The Learning (in order) to Reason approach developed here views learning as an integral part of the inference process, and suggests that learning and reasoning should be studied together. The Learning to Reason framework combines the interfaces to the world used by known learning models with the reasoning task and a performance criterion suitable for it. In this framework, the intelligent agent is given access to its favorite learning interface, and is also given a grace period in which it can interact with this interface and construct a representation KB of the world W. The reasoning performance is measured only after this period, when the agent is presented with queries α from some query language, relevant to the world, and has to answer whether W implies α. The approach is meant to overcome the main computational difficulties in the traditional treatment of reasoning, which stem from its separation from the "world". Since the agent interacts with the world when constructing its knowledge representation, it can choose a representation that is useful for the task at hand. Moreover, we can now make explicit the dependence of the reasoning performance on the environment the agent interacts with. We show how previous results from learning theory and reasoning fit into this framework and ...
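The two-phase protocol can be sketched as an interface: a grace period in which the agent builds its representation KB from a learning interface, then a query phase measured afterwards. As one hypothetical instantiation (my own, not the paper's formal model), the interface supplies sampled models of W and queries are answered against the stored models:

```python
class LearningToReasonAgent:
    """Sketch of the two-phase protocol: a learning (grace) period that
    builds a representation KB of the world W, then a query phase in which
    the agent answers "does W imply alpha?" using only KB."""

    def __init__(self):
        self.kb = []  # learned representation: a list of observed models

    def learn(self, example_model):
        """Grace period: record one model supplied by the learning interface."""
        self.kb.append(example_model)

    def query(self, alpha):
        """Query phase: answer relative to the learned representation."""
        return all(alpha(m) for m in self.kb)

agent = LearningToReasonAgent()
agent.learn({"p": True, "q": True})
agent.learn({"p": True, "q": False})
print(agent.query(lambda m: m["p"]))  # True
print(agent.query(lambda m: m["q"]))  # False
```

The framework's point is visible even in this toy: the representation is chosen during interaction with the world, so its adequacy for later queries depends on that environment rather than on a fixed, worst-case input formula.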
The comparative linguistics of knowledge representation
 In Proc. of IJCAI’95
, 1995
Abstract

Cited by 55 (2 self)
We develop a methodology for comparing knowledge representation formalisms in terms of their "representational succinctness," that is, their ability to express knowledge situations relatively efficiently. We use this framework for comparing many important formalisms for knowledge base representation: propositional logic, default logic, circumscription, and model preference defaults; and, at a lower level, Horn formulas, characteristic models, decision trees, disjunctive normal form, and conjunctive normal form. We also show that adding new variables improves the effective expressibility of certain knowledge representation formalisms.
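A classical illustration of the last point is parity: a CNF for x1 xor ... xor xn over only the original variables needs 2^(n-1) clauses, while Tseitin-style auxiliary definitions y_i <-> y_{i-1} xor x_i bring this down to O(n). A sketch comparing the two sizes (my own illustration, not an example taken from the paper):

```python
from itertools import product

def direct_xor_cnf(n):
    """CNF for x1 xor ... xor xn without auxiliary variables: one clause
    ruling out each even-parity assignment, i.e. 2^(n-1) clauses."""
    clauses = []
    for bits in product([False, True], repeat=n):
        if sum(bits) % 2 == 0:  # even parity violates the xor: forbid it
            # The clause is falsified exactly by this assignment.
            clauses.append([-(i + 1) if b else (i + 1)
                            for i, b in enumerate(bits)])
    return clauses

def tseitin_xor_cnf_size(n):
    """With auxiliary variables y_i <-> y_{i-1} xor x_i, each definition
    takes 4 clauses, plus one unit clause asserting the final y."""
    return 4 * (n - 1) + 1

for n in (4, 10, 20):
    print(n, len(direct_xor_cnf(n)), tseitin_xor_cnf_size(n))
```

At n = 20 the gap is 524288 clauses versus 77: the two formulations express the same constraint over the original variables, but only the one with new variables does so succinctly.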
Forming Concepts for Fast Inference
 In Proceedings of the Tenth National Conference on Artificial Intelligence (AAAI92
, 1992
Abstract

Cited by 49 (2 self)
Knowledge compilation speeds inference by creating tractable approximations of a knowledge base, but this advantage is lost if the approximations are too large. We show how learning concept generalizations can allow for a more compact representation of the tractable theory. We also give a general induction rule for generating such concept generalizations. Finally, we prove that unless NP ⊆ nonuniform P, not all theories have small Horn least upper-bound approximations.

1 Introduction
Work in machine learning has traditionally been divided into two main camps: concept learning (e.g. [Kearns, 1990]) and speedup learning (e.g. [Minton, 1988]). The work reported in this paper bridges these two areas by showing how concept learning can be used to speed up inference by allowing a more compact and efficient representation of a knowledge base. We have been studying techniques for boosting the performance of knowledge representation systems by compiling expressive but intractable repre...
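The compression idea can be illustrated with a toy count: when many Horn rules share the same antecedents, defining a new concept for that conjunction and rewriting the rules through it shrinks the theory. (The rules, atom names, and literal-count measure below are all hypothetical.)

```python
# Hypothetical Horn theory: five rules that all share the antecedents a, b, c.
# Each rule is (body_atoms, head_atom); size = literals per rule = |body| + 1.
rules = [(("a", "b", "c"), head) for head in ["h1", "h2", "h3", "h4", "h5"]]
literals_before = sum(len(body) + 1 for body, _ in rules)

# Introduce a defined concept d <- a, b, c and rewrite each rule as d -> head.
defined = [(("a", "b", "c"), "d")]
rewritten = [(("d",), head) for _, head in rules]
literals_after = sum(len(body) + 1 for body, _ in defined + rewritten)

print(literals_before, literals_after)  # 20 14
```

The saving grows with the number of rules sharing the antecedents; the paper's induction rule is about discovering such shared generalizations automatically rather than, as here, by inspection.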
Is Intractability of Non-Monotonic Reasoning a Real Drawback?
 Artificial Intelligence
, 1996
Abstract

Cited by 42 (8 self)
Several studies about the computational complexity of nonmonotonic reasoning (NMR) showed that nonmonotonic inference is significantly harder than classical, monotonic inference. This contrasts with the general idea that NMR can be used to make knowledge representation and reasoning simpler, not harder. In this paper we show that, to some extent, NMR fulfills the representation goal. In particular, we prove that nonmonotonic formalisms such as circumscription and default logic allow for a much more compact and natural representation of propositional knowledge than propositional calculus. Proofs are based on a suitable definition of compilable inference problem, and on nonuniform complexity classes. Some results about intractability of circumscription and default logic can therefore be interpreted as the price one has to pay for having such an extra-compact representation. On the other hand, intractability of inference and compactness of representation are not equivalent notions: we ex...