Results 1–10 of 190
Using Temporal Logics to Express Search Control Knowledge for Planning
 Artificial Intelligence
, 1999
"... Over the years increasingly sophisticated planning algorithms have been developed. These have made for more efficient planners, but unfortunately these planners still suffer from combinatorial complexity even in simple domains. Theoretical results demonstrate that planning is in the worst case in ..."
Abstract

Cited by 297 (14 self)
Over the years, increasingly sophisticated planning algorithms have been developed. These have made for more efficient planners, but unfortunately these planners still suffer from combinatorial complexity even in simple domains. Theoretical results demonstrate that planning is in the worst case intractable. Nevertheless, planning in particular domains can often be made tractable by utilizing additional domain structure. In fact, it has long been acknowledged that domain-independent planners need domain-dependent information to help them plan effectively. In this ...
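The core idea of search control knowledge can be sketched as follows: a temporal control rule is evaluated over each state-sequence prefix the planner generates, and any prefix that violates the rule is pruned before expansion. This is only an illustrative toy, not the paper's formalism; the rule and state facts are invented.

```python
# A toy sketch of temporal search control: any search prefix that
# violates the control rule is pruned. All names are illustrative.

def always(pred):
    """Control rule: pred must hold in every state of the prefix."""
    return lambda prefix: all(pred(s) for s in prefix)

# Hypothetical rule: never disturb a block already in its goal position.
def goal_towers_untouched(state):
    return "misplaced_goal_block" not in state

rule = always(goal_towers_untouched)

def prune(prefixes, rule):
    """Keep only search prefixes consistent with the control rule."""
    return [p for p in prefixes if rule(p)]

prefixes = [
    [{"clear_a"}, {"clear_a", "holding_b"}],   # respects the rule
    [{"clear_a"}, {"misplaced_goal_block"}],   # violates the rule
]
good = prune(prefixes, rule)
assert len(good) == 1
```

Pruning in this style turns domain-dependent knowledge into a cheap filter on the search frontier, which is the mechanism the abstract alludes to.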
Toward a Logic for Qualitative Decision Theory
 In Proceedings of KR'94
, 1992
"... We present a logic for representing and reasoning with qualitative statements of preference and normality and describe how these may interact in decision making under uncertainty. Our aim is to develop a logical calculus that employs the basic elements of classical decision theory, namely proba ..."
Abstract

Cited by 207 (4 self)
We present a logic for representing and reasoning with qualitative statements of preference and normality and describe how these may interact in decision making under uncertainty. Our aim is to develop a logical calculus that employs the basic elements of classical decision theory, namely probabilities, utilities and actions, but exploits qualitative information about these elements directly for the derivation of goals. Preferences and judgements of normality are captured in a modal/conditional logic, and a simple model of action is incorporated. Without quantitative information, decision criteria other than maximum expected utility are pursued. We describe how techniques for conditional default reasoning can be used to complete information about both preferences and normality judgements, and we show how maximin and maximax strategies can be expressed in our logic.
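The maximin and maximax criteria mentioned at the end can be illustrated concretely over purely qualitative utilities. This is a minimal sketch assuming a totally ordered set of utility labels; the action names and rankings are made up, and nothing here reflects the paper's modal/conditional machinery.

```python
# Maximin vs. maximax over qualitative (ordered-label) utilities.
# RANK imposes the qualitative ordering; no probabilities are used.

RANK = {"bad": 0, "ok": 1, "good": 2}

def maximin(actions):
    # Pick the action whose WORST possible outcome is best.
    return max(actions, key=lambda a: min(RANK[u] for u in actions[a]))

def maximax(actions):
    # Pick the action whose BEST possible outcome is best.
    return max(actions, key=lambda a: max(RANK[u] for u in actions[a]))

actions = {
    "cautious": ["ok", "ok"],     # guaranteed mediocrity
    "risky": ["bad", "good"],     # high variance
}
assert maximin(actions) == "cautious"
assert maximax(actions) == "risky"
```

The contrast shows why such criteria need no quantitative information: only the ordering of outcomes matters.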
Temporal Interpretation, Discourse Relations and Common Sense Entailment
, 1993
"... This paper presents a formal account of how to determine the discourse relations between sentences in a text, and the relations between the events they describe. The distinct natural interpretations of texts with similar syntax are explained in terms of defeasible rules. These characterise the ef ..."
Abstract

Cited by 139 (20 self)
This paper presents a formal account of how to determine the discourse relations between sentences in a text, and the relations between the events they describe. The distinct natural interpretations of texts with similar syntax are explained in terms of defeasible rules. These characterise the effects of causal knowledge and knowledge of language use on interpretation. Patterns of defeasible entailment that are supported by the logic in which the theory is expressed are shown to underlie temporal interpretation.

1 The Problem of Temporal Relations

An essential part of text interpretation involves calculating the relations between the events described. But sentential syntax and compositional semantics alone don't provide the basis for doing this. The sentences in (1) and (2) have the same syntax, and so using compositional semantics one would predict that the events stand in similar temporal relations. (1) Max stood up. John greeted him. (2) Max fell. John pushed him. But in (1...
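The defeasible-rule idea behind examples (1) and (2) can be caricatured in a few lines: Narration (events in text order) is the default, and a more specific causal rule defeats it, yielding Explanation (reverse order). The causal lexicon below is invented and far simpler than the paper's logic.

```python
# Toy defeasible interpretation: Narration by default; specific
# causal knowledge overrides it, giving Explanation instead.

CAUSES = {("push", "fall")}   # pushing can cause falling

def discourse_relation(e1, e2):
    # The more specific causal rule defeats the Narration default.
    if (e2, e1) in CAUSES:     # the second-described event caused the first
        return "Explanation"   # so e2 temporally precedes e1
    return "Narration"         # default: e1 precedes e2

# (1) Max stood up. John greeted him.  -> Narration (text order)
assert discourse_relation("stand_up", "greet") == "Narration"
# (2) Max fell. John pushed him.       -> Explanation (reverse order)
assert discourse_relation("fall", "push") == "Explanation"
```

Even this toy shows how identical syntax can yield different temporal orders once world knowledge enters the picture.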
Model Checking vs. Theorem Proving: A Manifesto
, 1991
"... We argue that rather than representing an agent's knowledge as a collection of formulas, and then doing theorem proving to see if a given formula follows from an agent's knowledge base, it may be more useful to represent this knowledge by a semantic model, and then do model checking to se ..."
Abstract

Cited by 125 (5 self)
We argue that rather than representing an agent's knowledge as a collection of formulas, and then doing theorem proving to see if a given formula follows from an agent's knowledge base, it may be more useful to represent this knowledge by a semantic model, and then do model checking to see if the given formula is true in that model. We discuss how to construct a model that represents an agent's knowledge in a number of different contexts, and then consider how to approach the model-checking problem.
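The contrast the abstract draws can be made concrete with a minimal sketch: represent the agent's knowledge as a set of possible worlds (an S5-style model) and evaluate "K p" by checking that p holds in every world, rather than deriving it from axioms. The formula encoding and world contents are invented for illustration.

```python
# Minimal model checking for epistemic formulas: a model is a set of
# worlds (truth assignments); "K p" holds iff p is true in every world
# the agent considers possible. Formulas are nested tuples.

def holds(formula, world, worlds):
    op = formula[0]
    if op == "atom":
        return formula[1] in world
    if op == "not":
        return not holds(formula[1], world, worlds)
    if op == "and":
        return holds(formula[1], world, worlds) and \
               holds(formula[2], world, worlds)
    if op == "K":   # knowledge: true in all accessible worlds
        return all(holds(formula[1], w, worlds) for w in worlds)
    raise ValueError(f"unknown operator {op}")

worlds = [frozenset({"p", "q"}), frozenset({"p"})]
actual = worlds[0]
assert holds(("K", ("atom", "p")), actual, worlds)       # p known
assert not holds(("K", ("atom", "q")), actual, worlds)   # q not known
```

Evaluation is a straightforward recursion over the model, which is the efficiency point the manifesto makes against general theorem proving.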
A Knowledge-Based Approach to Planning with Incomplete Information and Sensing
, 2002
"... In this paper we present a new approach to the problem of planning with incomplete information and sensing. Our approach is based on a higher level, "knowledgebased," representation of the planner's knowledge and of the domain actions. In particular, in our approach we use a set of f ..."
Abstract

Cited by 92 (8 self)
In this paper we present a new approach to the problem of planning with incomplete information and sensing. Our approach is based on a higher-level, "knowledge-based," representation of the planner's knowledge and of the domain actions. In particular, in our approach we use a set of formulae from a first-order modal logic of knowledge to represent the planner's incomplete knowledge state. Actions are then represented as updates to this collection of formulae. Hence, actions are modelled in terms of how they modify the knowledge state of the planner rather than in terms of how they modify the physical world. We have constructed a planner to utilize this representation and we use it to show that on many common problems this more abstract representation is perfectly adequate for solving the planning problem, and that in fact it scales better and supports features that make it applicable to much richer domains and problems.
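The "actions as knowledge updates" view can be sketched very roughly: the planner's state is a set of epistemic formulas, a sensing action adds "knows-whether" information, and a physical action rewrites what is known to hold. The fluent and action names below are invented, and the update rules are far cruder than the paper's logic.

```python
# Toy knowledge-state updates: sensing adds "knows-whether" (Kw)
# facts; physical actions rewrite "knows" (K) facts. Formulas are
# tuples; the vocabulary is purely illustrative.

def apply_sense(knows, fluent):
    """After sensing, the agent knows WHETHER `fluent` holds."""
    return knows | {("Kw", fluent)}

def apply_physical(knows, add, delete):
    """A physical action updates what is known to hold."""
    return (knows - {("K", f) for f in delete}) | {("K", f) for f in add}

kb = {("K", "at_home")}
kb = apply_sense(kb, "door_locked")                       # sensing action
kb = apply_physical(kb, add={"at_office"}, delete={"at_home"})

assert ("Kw", "door_locked") in kb
assert ("K", "at_home") not in kb
assert ("K", "at_office") in kb
```

The point being illustrated: nothing here models the physical world directly, only what the planner knows about it.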
The Logic of Knowledge Bases
, 2000
"... Recently Lakemeyer and Levesque proposed the logic, which amalgamates both the situation calculus and Levesque’s logic of only knowing. While very expressive the practical relevance of the formalism is unclear because it heavily relies on secondorder logic. In this paper we demonstrate that the pic ..."
Abstract

Cited by 91 (8 self)
Recently, Lakemeyer and Levesque proposed a logic which amalgamates the situation calculus and Levesque's logic of only knowing. While very expressive, the practical relevance of the formalism is unclear because it relies heavily on second-order logic. In this paper we demonstrate that the picture is not as bleak as it may seem. In particular, we show that for large classes of knowledge bases and queries, including epistemic ones, query evaluation requires first-order reasoning only. We also provide a simple semantic definition of progressing a knowledge base. For a particular class of knowledge bases, adapted from earlier results by Lin and Reiter, we show that progression is first-order representable and easy to compute.
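For the restricted knowledge bases the abstract mentions, query evaluation and progression can be caricatured as database operations: the KB is a finite set of ground facts, an epistemic query reduces to membership, and progression replaces the facts an action affects. This is only a loose illustration of the flavor of the result, not the paper's construction; all facts and names are invented.

```python
# Database-flavored sketch: KB as a finite set of ground atoms.
# "Know p" reduces to membership; progression rewrites affected facts.

kb = {("teaches", "sue", "cs101"), ("teaches", "bob", "cs240")}

def known(kb, fact):
    """Epistemic query against a complete, finite KB."""
    return fact in kb

def progress(kb, removed, added):
    """Naive progression: replace the facts an action changes."""
    return (kb - removed) | added

assert known(kb, ("teaches", "sue", "cs101"))
kb2 = progress(kb,
               removed={("teaches", "sue", "cs101")},
               added={("teaches", "sue", "cs305")})
assert not known(kb2, ("teaches", "sue", "cs101"))
assert known(kb2, ("teaches", "sue", "cs305"))
```

The interesting content of the paper is exactly when such cheap, first-order evaluation is provably correct despite the second-order definition of the logic.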
Sound and efficient closed-world reasoning for planning
 Artificial Intelligence
, 1997
"... Closedworld inference is the process of determining that a logical sentence is false based on its absence from a knowledge base, or the inability to derive it. This process is essential for planning with incomplete information. We describe a novel method for closedworld inference and update over t ..."
Abstract

Cited by 79 (12 self)
Closed-world inference is the process of determining that a logical sentence is false based on its absence from a knowledge base, or the inability to derive it. This process is essential for planning with incomplete information. We describe a novel method for closed-world inference and update over the first-order theories of action used by planning algorithms such as NONLIN, TWEAK, and UCPOP. We show the method to be sound and efficient, but incomplete. In our experiments, closed-world inference consistently averaged about 2 milliseconds, while updates averaged approximately 1.2 milliseconds. We incorporated the method into the XII planner, which supports our Internet Softbot (software robot). The method cut the number of actions executed by the Softbot by a factor of one hundred, and resulted in a corresponding speedup to XII.
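The distinction between False and Unknown that closed-world inference manages can be sketched with a local closed-world (LCW) flavor: a ground query is judged False only when it is absent from the database and covered by a stored closed-world statement; otherwise it is merely Unknown. The predicates below are invented and the coverage test is deliberately crude.

```python
# Sketch of local closed-world inference: absence plus a covering
# closed-world statement yields False; bare absence yields Unknown.

facts = {("parent.dir", "a.txt"), ("parent.dir", "b.txt")}
# "We know EVERY file in parent.dir": closed-world info for that predicate.
lcw = {("parent.dir", None)}     # None = closed over any second argument

def query(q):
    if q in facts:
        return "True"
    if (q[0], None) in lcw:      # absence is meaningful here
        return "False"
    return "Unknown"             # absence proves nothing

assert query(("parent.dir", "a.txt")) == "True"
assert query(("parent.dir", "z.txt")) == "False"
assert query(("other.dir", "a.txt")) == "Unknown"
```

Distinguishing False from Unknown is what lets a planner avoid redundant sensing actions, which is where the reported hundredfold reduction comes from.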
Minimal Belief and Negation as Failure
 Artificial Intelligence
, 1994
"... Fangzhen Lin and Yoav Shoham defined a propositional nonmonotonic logic which uses two independent modal operators. One of them represents minimal knowledge, the other is related to the ideas of justification (as understood in default logic) and of negation as failure. We describe a simplified versi ..."
Abstract

Cited by 73 (5 self)
Fangzhen Lin and Yoav Shoham defined a propositional nonmonotonic logic which uses two independent modal operators. One of them represents minimal knowledge, the other is related to the ideas of justification (as understood in default logic) and of negation as failure. We describe a simplified version of that system, show how quantifiers can be included in it, and study its relation to circumscription and default logic, to logic programming, and to the theory of epistemic queries developed by Hector Levesque and Ray Reiter.

1 Introduction

Lin and Shoham [16] defined a propositional nonmonotonic logic which uses two independent modal operators. One of them represents minimal knowledge, the other is related to the ideas of justification (as understood in default logic) and of negation as failure. In this paper, we consider a special case of that system, in which Kripke structures of a particularly simple kind are used, and show how quantifiers can be included in it. This extension i...
Negation As Failure In The Head
, 1998
"... The class of logic programs with negation as failure in the head is a subset of the logic of MBNF introduced by Lifschitz and is an extension of the class of extended disjunctive programs. An interesting feature of such programs is that the minimality of answer sets does not hold. This paper conside ..."
Abstract

Cited by 61 (2 self)
The class of logic programs with negation as failure in the head is a subset of the logic of MBNF introduced by Lifschitz and is an extension of the class of extended disjunctive programs. An interesting feature of such programs is that the minimality of answer sets does not hold. This paper considers the class of general extended disjunctive programs (GEDPs) as logic programs with negation as failure in the head. First, we argue that the class of GEDPs is useful for representing knowledge in various domains in which the principle of minimality is too strong. In particular, the class of abductive programs is properly included in the class of GEDPs. Other applications include the representation of inclusive disjunctions and circumscription with fixed predicates. Secondly, the semantic nature of GEDPs is analyzed through the syntax of programs. In acyclic programs, negation as failure in the head can be shifted to the body without changing the answer sets of the program. On the other hand, supported sets of any program are always preserved by the same transformation. Thirdly, the computational complexity of the class of GEDPs is shown to remain in the same complexity class as normal disjunctive programs. Through the simulation of negation as failure in the head, computation of answer sets and supported sets is realized using any proof procedure for extended or positive disjunctive programs. Finally, a simple translation of GEDPs into autoepistemic logic is presented.
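Answer-set semantics with negation as failure, which this paper generalizes, can be demonstrated with a brute-force guess-and-check: guess a candidate set, build the Gelfond-Lifschitz reduct, and accept the candidate iff it equals the least model of the reduct. This toy handles only body negation in rules of the form `head :- pos, not neg`, not the paper's head negation; the program is the classic two-answer-set example.

```python
# Guess-and-check answer sets for a tiny propositional program.
# Program:  p :- not q.   q :- not p.   (two answer sets: {p}, {q})
from itertools import chain, combinations

atoms = {"p", "q"}
rules = [("p", [], ["q"]), ("q", [], ["p"])]   # (head, pos_body, neg_body)

def reduct_model(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose NAF part is blocked,
    # then take the least model of the remaining definite rules.
    definite = [(h, pos) for (h, pos, neg) in rules
                if not (set(neg) & candidate)]
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in definite:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model

def powerset(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

answer_sets = [m for m in powerset(atoms) if reduct_model(m) == m]
assert {"p"} in answer_sets and {"q"} in answer_sets
assert set() not in answer_sets
```

For ordinary programs like this one, answer sets are minimal; the abstract's point is that allowing `not` in rule heads breaks that minimality, which this sketch cannot exhibit.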
Downward Refinement and the Efficiency of Hierarchical Problem Solving
 Artificial Intelligence
, 1993
"... Analysis and experiments have shown that hierarchical problemsolving is most effective when the hierarchy satisfies the downward refinement property (DRP), whereby every abstract solution can be refined to a concretelevel solution without backtracking across abstraction levels. However, the DRP i ..."
Abstract

Cited by 56 (1 self)
Analysis and experiments have shown that hierarchical problem-solving is most effective when the hierarchy satisfies the downward refinement property (DRP), whereby every abstract solution can be refined to a concrete-level solution without backtracking across abstraction levels. However, the DRP is a strong requirement that is not often met in practice. In this paper we examine the case when the DRP fails, and provide an analytical model of search complexity parameterized by the probability of an abstract solution being refinable. Our model provides a more accurate picture of the effectiveness of hierarchical problem-solving. We then formalize the DRP in ABSTRIPS-style hierarchies, providing a syntactic test that can be applied to determine if a hierarchy satisfies the DRP. Finally, we describe an algorithm called Highpoint that we have developed. This algorithm builds on the Alpine algorithm of Knoblock in that it automatically generates abstraction hierarchies. However, it uses th...
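The flavor of a refinability-parameterized analysis can be shown with a deliberately simplified calculation: if each abstract solution refines to a concrete solution independently with probability p, the expected number of abstract solutions tried before one refines is 1/p (a geometric distribution). This is an illustration of the kind of model described, not the paper's actual complexity analysis.

```python
# Toy refinability model: expected abstract solutions tried before
# one refines, assuming independent success probability p per try.

def expected_abstract_tries(p):
    """Mean of a geometric distribution with success probability p."""
    assert 0 < p <= 1
    return 1.0 / p

# Under the DRP (p = 1) a single abstract solution always suffices;
# as p drops, the cost of backtracking across levels grows.
assert expected_abstract_tries(1.0) == 1.0
assert expected_abstract_tries(0.25) == 4.0
```

Even this caricature captures the qualitative conclusion: abstraction pays off only while the probability of downward refinement stays high.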