Results 1-10 of 29
Knowledge compilation and theory approximation
 Journal of the ACM, 1996
Abstract

Cited by 185 (5 self)
Computational efficiency is a central concern in the design of knowledge representation systems. In order to obtain efficient systems, it has been suggested that one should limit the form of the statements in the knowledge base or use an incomplete inference mechanism. The former approach is often too restrictive for practical applications, whereas the latter leads to uncertainty about exactly what can and cannot be inferred from the knowledge base. We present a third alternative, in which knowledge given in a general representation language is translated (compiled) into a tractable form, allowing for efficient subsequent query answering. We show how propositional logical theories can be compiled into Horn theories that approximate the original information. The approximations bound the original theory from below and above in terms of logical strength. The procedures are extended to other tractable languages (for example, binary clauses) and to the first-order case. Finally, we demonstrate the generality of our approach by compiling concept descriptions in a general frame-based language into a tractable form.
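As a concrete illustration of the query-answering scheme this abstract describes, the toy sketch below answers clause queries by consulting a Horn lower bound and a Horn upper bound before falling back to the original theory. It is a brute-force model enumeration, not the paper's compilation procedure, and the theory `sigma` with its bounds `glb`/`lub` is a hand-picked example.

```python
# Toy illustration of answering queries via Horn bounds (brute force over
# truth assignments; sigma, glb, and lub are hand-picked examples, not the
# output of the paper's compilation procedure).
from itertools import product

ATOMS = ["a", "b", "c"]

def satisfies(v, clause):
    """A clause is a list of (positive?, atom) literals."""
    return any(v[atom] == pos for pos, atom in clause)

def entails(theory, clause):
    """theory |= clause iff every model of the theory satisfies the clause."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(satisfies(v, c) for c in theory) and not satisfies(v, clause):
            return False
    return True

# Non-Horn theory: (a or b), (a -> c), (b -> c).
sigma = [[(True, "a"), (True, "b")],
         [(False, "a"), (True, "c")],
         [(False, "b"), (True, "c")]]
# A Horn lower bound (strengthens a-or-b to a) and a Horn upper bound.
glb = [[(True, "a")], [(False, "a"), (True, "c")], [(False, "b"), (True, "c")]]
lub = [[(True, "c")]]

def ask(query):
    """Use the bounds first; touch the original theory only if inconclusive."""
    if entails(lub, query):       # weaker theory entails it, so sigma does too
        return True
    if not entails(glb, query):   # stronger theory fails it, so sigma does too
        return False
    return entails(sigma, query)  # bounds inconclusive: fall back to sigma
```

Here `ask([(True, "c")])` is settled by the upper bound alone, while `ask([(True, "a")])` slips between the two bounds and must fall back to `sigma`; the payoff in the paper's setting is that the bound tests are Horn and hence tractable.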
Knowledge Compilation Using Horn Approximations
 In Proceedings of AAAI-91, 1991
Abstract

Cited by 124 (10 self)
We present a new approach to developing fast and efficient knowledge representation systems. Previous approaches to the problem of tractable inference have used restricted languages or incomplete inference mechanisms; the problems include lack of expressive power, lack of inferential power, and/or lack of a formal characterization of what can and cannot be inferred. To overcome these disadvantages, we introduce a knowledge compilation method. We allow the user to enter statements in a general, unrestricted representation language, which the system compiles into a restricted language that allows for efficient inference. Since an exact translation into a tractable form is often impossible, the system searches for the best approximation of the original information. We will describe how the approximation can be used to speed up inference without giving up correctness or completeness. We illustrate our method by studying the approximation of logical theories by Horn theories. Following the ...
Computing Least Common Subsumers in Description Logics
 In Proceedings of the 10th National Conference on Artificial Intelligence, 1992
Abstract

Cited by 107 (14 self)
Description logics are a popular formalism for knowledge representation and reasoning. This paper introduces a new operation for description logics: computing the "least common subsumer" of a pair of descriptions. This operation computes the largest set of commonalities between two descriptions. After arguing for the usefulness of this operation, we analyze it by relating computation of the least common subsumer to the well-understood problem of testing subsumption; a close connection is shown in the restricted case of "structural subsumption". We also present a method for computing the least common subsumer of "attribute chain equalities", and analyze the tractability of computing the least common subsumer of a set of descriptions, an important operation in inductive learning.
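The operation is easiest to see in a toy conjunctive fragment (our own illustration, far smaller than the languages the paper treats): if a description is a bare conjunction of atomic concepts modeled as a set, structural subsumption reduces to set containment and the least common subsumer to set intersection.

```python
# Toy fragment: a description is a bare conjunction of atomic concepts,
# modeled as a set of concept names (illustration only; the paper's
# languages and algorithms are richer than this).
def subsumes(d1, d2):
    """d1 subsumes d2 (d2 is at least as specific) iff d1's atoms all occur in d2."""
    return d1 <= d2

def lcs(d1, d2):
    """Least common subsumer: the largest conjunction both descriptions imply."""
    return d1 & d2

dog = {"animal", "four-legged", "barks"}
cat = {"animal", "four-legged", "meows"}
common = lcs(dog, cat)  # {"animal", "four-legged"}
```

`common` subsumes both inputs, and any other common subsumer (say `{"animal"}`) subsumes `common`, which is exactly the "least" in least common subsumer.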
A Survey on Complexity Results for Nonmonotonic Logics
 Journal of Logic Programming, 1993
Abstract

Cited by 91 (6 self)
This paper surveys the main results that have appeared in the literature on the computational complexity of nonmonotonic inference tasks. We not only give results about the tractability/intractability of the individual problems but also analyze sources of complexity and explain intuitively the nature of easy/hard cases. We focus mainly on nonmonotonic formalisms, like default logic, autoepistemic logic, circumscription, closed-world reasoning and abduction, whose relations with logic programming are clear and well studied. Complexity as well as recursion-theoretic results are surveyed. Work partially supported by the ESPRIT Basic Research Action COMPULOG and the Progetto Finalizzato Informatica of the CNR (Italian Research Council). The first author is supported by a CNR scholarship.
1 Introduction
Nonmonotonic logics and negation as failure in logic programming have been defined with the goal of providing formal tools for the representation of default information. One of the ideas und...
Disjunctive Deductive Databases
, 1994
Abstract

Cited by 59 (7 self)
Background material is presented on deductive and normal deductive databases. A historical review is then given of work in disjunctive deductive databases, starting from 1982. The semantics of alternative classes of disjunctive databases are reviewed, together with their model and fixpoint characterizations. Algorithms are developed to compute answers to queries in the alternative theories using the concept of a model tree. Open problems in this area are discussed.
Logic and Databases: a 20 Year Retrospective
, 1996
Abstract

Cited by 58 (1 self)
At a workshop held in Toulouse, France in 1977, Gallaire, Minker and Nicolas stated that logic and databases was a field in its own right (see [131]). This was the first time that this designation was made. The impetus for this started approximately twenty years ago in 1976 when I visited Gallaire and Nicolas in Toulouse, France, which culminated in a workshop held in Toulouse, France in 1977. It is appropriate, then, to provide an assessment of what has been achieved in the twenty years since the field started as a distinct discipline. In this retrospective I shall review developments that have taken place in the field, assess the contributions that have been made, consider the status of implementations of deductive databases and discuss the future of work in this area.
1 Introduction
As described in [234], the use of logic and deduction in databases started in the late 1960s. Prominent among the developments was the work by Levien and Maron [202, 203, 199, 200, 201] and Kuhns [1...
Is Intractability of Non-Monotonic Reasoning a Real Drawback?
 Artificial Intelligence, 1996
Abstract

Cited by 48 (9 self)
Several studies about computational complexity of nonmonotonic reasoning (NMR) showed that nonmonotonic inference is significantly harder than classical, monotonic inference. This contrasts with the general idea that NMR can be used to make knowledge representation and reasoning simpler, not harder. In this paper we show that, to some extent, NMR fulfills the representation goal. In particular, we prove that nonmonotonic formalisms such as circumscription and default logic allow for a much more compact and natural representation of propositional knowledge than propositional calculus. Proofs are based on a suitable definition of compilable inference problem, and on nonuniform complexity classes. Some results about intractability of circumscription and default logic can therefore be interpreted as the price one has to pay for having such an extra-compact representation. On the other hand, intractability of inference and compactness of representation are not equivalent notions: we ex...
Semantical and Computational Aspects of Horn Approximations
 In Proceedings of the International Joint Conference on Artificial Intelligence, 1993
Abstract

Cited by 33 (4 self)
In a recent study Selman and Kautz proposed a method, called Horn approximation, for speeding up inference in propositional knowledge bases. Their technique is based on the compilation of a propositional formula into a pair of Horn formulae: a Horn Greatest Lower Bound (GLB) and a Horn Least Upper Bound (LUB). In this paper we address two questions that have been only marginally addressed so far: 1) what is the semantics of the Horn approximations? 2) what is the exact complexity of finding Horn approximations? We obtain semantical as well as computational results. The major results of the former kind are: Horn GLBs are closely related to models of circumscription; reasoning with respect to the Horn LUB can be mapped into classical reasoning. The major results of the latter kind are: finding a Horn GLB is "mildly" harder than solving the original inference problem; finding the Horn LUB is a search problem that cannot be parallelized. We believe that our results provide useful criteria that m...
An Overview of Nonmonotonic Reasoning and Logic Programming
 Journal of Logic Programming, Special Issue, 1993
Abstract

Cited by 28 (2 self)
The focus of this paper is nonmonotonic reasoning as it relates to logic programming. I discuss the prehistory of nonmonotonic reasoning starting from approximately 1958. I then review the research that has been accomplished in the areas of circumscription, default theory, modal theories and logic programming. The overview includes the major results developed, including complexity results that are known about the various theories. I then provide a summary which includes an assessment of the field and what must be done to further research in nonmonotonic reasoning and logic programming.
1 Introduction
Classical logic has played a major role in computer science. It has been an important tool both for the development of architecture and of software. Logicians have contended that reasoning, as performed by humans, is also amenable to analysis using classical logic. However, workers in the field of artificial intelligence ...
This paper is an updated version of an invited Banquet Address, First Interna...
Learning Default Concepts
 In Proceedings of the Tenth Canadian Conference on Artificial Intelligence (CSCSI-94), 1994
Abstract

Cited by 24 (8 self)
Classical concepts, based on necessary and sufficient defining conditions, cannot classify logically insufficient object descriptions. Many reasoning systems avoid this limitation by using "default concepts" to classify incompletely described objects. This paper addresses the task of learning such default concepts from observational data. We first model the underlying performance task, classifying incomplete examples, as a probabilistic process that passes random test examples through a "blocker" that can hide object attributes from the classifier. We then address the task of learning accurate default concepts from random training examples. After surveying the learning techniques that have been proposed for this task in the machine learning and knowledge representation literatures, and investigating their relative merits, we present a more data-efficient learning technique, developed from well-known statistical principles. Finally, we extend Valiant's PAC learning framework to ...
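The "blocker" performance setting can be sketched as follows; the attribute names, rules, and hiding probability below are invented for illustration and are not taken from the paper.

```python
# Toy sketch of the "blocker" setting: each attribute of a test example is
# independently hidden with probability p_hide before the example reaches a
# classifier that falls through a list of increasingly general default rules.
# (Attribute names, rules, and p_hide are illustrative assumptions.)
import random

def blocker(example, p_hide, rng):
    """Replace each attribute value with None (hidden) with probability p_hide."""
    return {attr: (None if rng.random() < p_hide else val)
            for attr, val in example.items()}

def default_classify(example, rules):
    """First rule whose known conditions all hold fires; later, weaker rules
    act as defaults when the blocker has hidden the discriminating attributes."""
    for conditions, label in rules:
        if all(example.get(a) == v for a, v in conditions.items()):
            return label
    return None

rng = random.Random(0)
bird = {"flies": True, "lays_eggs": True, "feathers": True}
rules = [({"feathers": True}, "bird"),        # specific rule
         ({"lays_eggs": True}, "egg-layer"),  # weaker default
         ({}, "unknown")]                     # catch-all default
seen = blocker(bird, 0.5, rng)                # hides "feathers" under this seed
label = default_classify(seen, rules)
```

With the fully observed example the specific rule fires; once the blocker hides `feathers`, classification falls through to the weaker default, which is the behavior the learning task has to anticipate.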