Results 1–10 of 28
Default-reasoning with models
Abstract

Cited by 79 (18 self)
Reasoning with model-based representations is an intuitive paradigm, which has been shown to be theoretically sound and to possess some computational advantages over reasoning with formula-based representations of knowledge. In this paper we present more evidence of the utility of such representations. In real-life situations, one normally completes a lot of missing "context" information when answering queries. We model this situation by augmenting the available knowledge about the world with context-specific information; we show that reasoning with model-based representations can be done efficiently in the presence of varying context information. We then consider the task of default reasoning. We show that default reasoning is a generalization of reasoning within context, in which the reasoner has many "context" rules, which may be conflicting. We characterize the cases in which model-based reasoning supports efficient default reasoning and develop algorithms that efficiently handle fragments of Reiter's default logic. In particular, this includes cases in which performing the default reasoning task with the traditional, formula-based, representation is intractable. Further, we argue that these results support an incremental view of reasoning in a natural way.
Learning to reason
 Journal of the ACM
, 1994
Abstract

Cited by 57 (24 self)
We introduce a new framework for the study of reasoning. The Learning (in order) to Reason approach developed here views learning as an integral part of the inference process, and suggests that learning and reasoning should be studied together. The Learning to Reason framework combines the interfaces to the world used by known learning models with the reasoning task and a performance criterion suitable for it. In this framework, the intelligent agent is given access to its favorite learning interface, and is also given a grace period in which it can interact with this interface and construct a representation KB of the world W. The reasoning performance is measured only after this period, when the agent is presented with queries from some query language, relevant to the world, and has to answer whether W implies them. The approach is meant to overcome the main computational difficulties in the traditional treatment of reasoning, which stem from its separation from the "world". Since the agent interacts with the world when constructing its knowledge representation, it can choose a representation that is useful for the task at hand. Moreover, we can now make explicit the dependence of the reasoning performance on the environment the agent interacts with. We show how previous results from learning theory and reasoning fit into this framework and ...
Propositional Independence: Formula-Variable Independence and Forgetting
 Journal of Artificial Intelligence Research
, 2003
Abstract

Cited by 55 (8 self)
Independence, the study of what is relevant to a given problem of reasoning, has received increasing attention from the AI community. In this paper, we consider two basic forms of independence, namely, a syntactic one and a semantic one. We show their features and drawbacks. In particular, while the syntactic form of independence is computationally easy to check, there are cases in which things that intuitively are not relevant are not recognized as such. We also consider the problem of forgetting, i.e., distilling from a knowledge base only the part that is relevant to the set of queries constructed from a subset of the alphabet. While such a process is computationally hard, it allows for a simplification of subsequent reasoning, and can thus be viewed as a form of compilation: once the relevant part of a knowledge base has been extracted, all reasoning tasks to be performed can be simplified.
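The forgetting operation this abstract discusses has a compact propositional definition: the result of forgetting a variable x from a formula F is F[x:=true] ∨ F[x:=false], the strongest consequence of F that does not mention x. A minimal sketch of this definition (not the paper's own algorithm), assuming formulas are represented as Python predicates over assignment dictionaries:

```python
from itertools import product

def forget(formula, var):
    """Return a predicate equivalent to (exists var. formula)."""
    def forgotten(assignment):
        a1 = dict(assignment); a1[var] = True
        a0 = dict(assignment); a0[var] = False
        return formula(a1) or formula(a0)
    return forgotten

# Illustrative example: F = (a -> b) AND (b -> c).
# Forgetting b should leave exactly a -> c.
F = lambda m: (not m["a"] or m["b"]) and (not m["b"] or m["c"])
G = forget(F, "b")

for a, c in product([False, True], repeat=2):
    assert G({"a": a, "c": c}) == (not a or c)
```

Reasoning over the forgotten formula then never needs to mention b, which is the simplification of subsequent reasoning the abstract alludes to.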
Reasoning with Characteristic Models
 In Proceedings of the National Conference on Artificial Intelligence
, 1993
Abstract

Cited by 36 (2 self)
Formal AI systems traditionally represent knowledge using logical formulas.
Learning to Reason with a Restricted View
, 1998
Abstract

Cited by 31 (15 self)
The Learning to Reason framework combines the study of Learning and Reasoning into a single task. Within it, learning is done specifically for the purpose of reasoning with the learned knowledge. Computational considerations show that this is a useful paradigm; in some cases learning and reasoning problems that are intractable when studied separately become tractable when performed as a task of Learning to Reason. In this paper we study Learning to Reason problems where the interaction with the world supplies the learner with only partial information in the form of partial assignments. Several natural interpretations of partial assignments are considered, and learning and reasoning algorithms using these are developed. The results presented exhibit a tradeoff between learnability, the strength of the oracles used in the interface, and the range of reasoning queries the learner is guaranteed to answer correctly.
Space Efficiency of Propositional Knowledge Representation Formalisms
 In Proceedings of the Fifth International Conference on the Principles of Knowledge Representation and Reasoning (KR'96)
, 2000
Abstract

Cited by 26 (3 self)
We investigate the space efficiency of Propositional Knowledge Representation (PKR) formalisms. Intuitively, the space efficiency of a formalism F in representing a certain piece of knowledge is the size of the shortest formula of F that represents it. In this paper we assume that knowledge is either a set of propositional interpretations (models) or a set of propositional formulae (theorems). We provide a formal way of talking about the relative ability of PKR formalisms to compactly represent a set of models or a set of theorems. We introduce two new compactness measures and the corresponding classes, and show that the relative space efficiency of a PKR formalism in representing models/theorems is directly related to such classes. In particular, we consider formalisms for nonmonotonic reasoning, such as circumscription and default logic, as well as belief revision operators and the stable model semantics for logic programs with negation. One interesting result is that formalisms ...
Translating between Horn Representations and their Characteristic Models
 JOURNAL OF AI RESEARCH
, 1995
Abstract

Cited by 24 (5 self)
Characteristic models are an alternative, model-based, representation for Horn expressions. It has been shown that these two representations are incomparable and each has its advantages over the other. It is therefore natural to ask what is the cost of translating, back and forth, between these representations. Interestingly, the same translation questions arise in database theory, where they have applications to the design of relational databases. We study the complexity of these problems and prove some positive and negative results. Our main result is that the two translation problems are equivalent under polynomial reductions, and that they are equivalent to the corresponding decision problem. Namely, translating is equivalent to deciding whether a given set of models is the set of characteristic models for a given Horn expression. We also relate these problems to translating between the CNF and DNF representations of monotone functions, a well-known problem for which no pol...
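The deduction side of the characteristic-model representation that several of these entries build on can be made concrete: the models of a Horn expression are exactly the intersection closure of its characteristic models, and Horn formulas are preserved under intersection of models, so a Horn query is entailed iff it holds in every characteristic model. A toy sketch of that check, with an invented model set and illustrative names:

```python
def entails(char_models, horn_query):
    """char_models: iterable of frozensets of the variables set to true.
    horn_query: predicate on such a set.  Checking only the characteristic
    models is sound and complete for Horn queries, since Horn formulas are
    preserved under model intersection."""
    return all(horn_query(m) for m in char_models)

# Invented toy set of characteristic models over {a, b, c}:
chars = [frozenset(), frozenset({"a", "b"}), frozenset({"b", "c"})]

implies_a_b = lambda m: ("a" not in m) or ("b" in m)   # clause a -> b
implies_a_c = lambda m: ("a" not in m) or ("c" in m)   # clause a -> c

assert entails(chars, implies_a_b)        # holds in every char. model
assert not entails(chars, implies_a_c)    # fails on {a, b}
```

Deduction thus becomes a linear scan over the stored models, which is the computational advantage the model-based entries above refer to.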
On Horn axiomatizations for sequential data
 Computer Science
, 2005
Abstract

Cited by 16 (10 self)
We propose a notion of deterministic association rules for ordered data. We prove that our proposed rules can be formally justified by a purely logical characterization, namely, a natural notion of empirical Horn approximation for ordered data which involves background Horn conditions; these ensure the consistency of the propositional theory obtained with the ordered context. The main proof resorts to a concept lattice model in the framework of Formal Concept Analysis, but adapted to ordered contexts. We also discuss a general method to mine these rules that can be easily incorporated into any algorithm for mining closed sequences, of which there are already some in the literature.
Reasoning with Examples: Propositional Formulae and Database Dependencies
Abstract

Cited by 14 (4 self)
For humans, looking at how concrete examples behave is an intuitive way of deriving conclusions. The drawback of this method is that it does not necessarily give correct results. However, under certain conditions example-based deduction can be used to obtain a correct and complete inference procedure. This is the case for Boolean formulae (reasoning with models) and for certain types of database integrity constraints (the use of Armstrong relations). We show that these approaches are closely related, and use the relationship to prove new results about the existence and sizes of Armstrong relations for Boolean dependencies. Furthermore, we exhibit close relations between the question of finding keys in relational databases and that of finding abductive explanations. Further applications of the correspondence between these two approaches are also discussed.
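The database side of this correspondence is easy to make concrete: an Armstrong relation for a set of functional dependencies satisfies exactly the dependencies entailed by the set, so entailment can be decided by checking a candidate dependency against that one concrete relation. A small sketch of the underlying check, with an invented toy relation:

```python
def satisfies_fd(rows, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in a
    relation given as a list of row dictionaries: equal lhs projections
    must force equal rhs projections."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

# Invented toy relation:
r = [
    {"emp": 1, "dept": "A", "mgr": "ann"},
    {"emp": 2, "dept": "A", "mgr": "ann"},
    {"emp": 3, "dept": "B", "mgr": "bob"},
]
assert satisfies_fd(r, ["dept"], ["mgr"])      # dept -> mgr holds
assert not satisfies_fd(r, ["mgr"], ["emp"])   # mgr -> emp fails
```

This mirrors "reasoning with models" on the logic side: one distinguished set of examples stands in for the whole theory.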
Learning Horn Expressions with LogAn-H
, 2000
Abstract

Cited by 11 (5 self)
The paper introduces LogAn-H, a system for learning first-order function-free Horn expressions from interpretations. The system is based on an interactive algorithm (one that asks questions) that was proved correct in previous work. The current paper shows how the algorithm can be implemented in a practical system by reducing some inefficiencies.