Results 1–10 of 35
Modularity Aspects of Disjunctive Stable Models
, 2007
Cited by 30 (8 self)
Practically all programming languages used in software engineering allow splitting a program into several modules. For fully declarative and nonmonotonic logic programming languages, however, the modular structure of programs is hard to realise, since the output of an entire program cannot in general be composed from the output of its component programs in a direct manner. In this paper, we consider these aspects for the stable-model semantics of disjunctive logic programs (DLPs). We define the notion of a DLP-function, where a well-defined input/output interface is provided, and establish a novel module theorem enabling a suitable compositional semantics for modules. The module theorem extends the well-known splitting-set theorem and also allows a generalisation of a shifting technique for splitting shared disjunctive rules among components.
Facts do not Cease to Exist Because They are Ignored: Relativised Uniform Equivalence with Answer-Set Projection
 In Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI 2007)
, 2007
Cited by 14 (10 self)
Recent research in answer-set programming (ASP) focuses on different notions of equivalence between programs which are relevant for program optimisation and modular programming. Prominent among these notions is uniform equivalence, which checks whether two programs have the same semantics when joined with an arbitrary set of facts. In this paper, we study a family of more fine-grained versions of uniform equivalence, where the alphabet of the added facts as well as the projection of answer sets is taken into account. The latter feature, in particular, allows the removal of auxiliary atoms in computation, which is important for practical programming aspects. We introduce novel semantic characterisations for the equivalence problems under consideration and analyse the computational complexity of checking these problems. We furthermore provide efficient reductions to quantified propositional logic, yielding a rapid-prototyping system for equivalence checking.
Replacements in non-ground answer-set programming
 In Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning (KR)
, 2006
A common view on strong, uniform, and other notions of equivalence in answer-set programming. Theory and Practice of Logic Programming
Cited by 9 (6 self)
Logic programming under the answer-set semantics nowadays deals with numerous different notions of program equivalence. This is due to the fact that equivalence for substitution (known as strong equivalence) and ordinary equivalence are different concepts. The former holds, given programs P and Q, iff P can be faithfully replaced by Q within any context R, while the latter holds iff P and Q provide the same output, that is, they have the same answer sets. Notions in between strong and ordinary equivalence have been introduced as theoretical tools to compare incomplete programs and are defined by either restricting the syntactic structure of the considered context programs R or by bounding the set A of atoms allowed to occur in R (relativised equivalence). For the latter approach, different A yield properly different equivalence notions, in general. For the former approach, however, it turned out that any “reasonable” syntactic restriction to R coincides with either ordinary, strong, or uniform equivalence (for uniform equivalence, the context ranges over arbitrary sets of facts, rather than program rules). In this paper, we propose a parameterisation for equivalence notions which takes care of both kinds of restrictions simultaneously by bounding, on the one hand, the atoms which are allowed to occur in the rule heads of the context and, on the other hand, the atoms which are allowed to occur in the rule bodies of the context. We introduce a general semantic characterisation which includes known ones, such as SE-models (for strong equivalence) or UE-models (for uniform equivalence), as special cases. Moreover, we provide complexity bounds for the problem in question and sketch a possible implementation method making use of dedicated systems for checking ordinary equivalence. KEYWORDS: answer-set programming, strong equivalence, relativised equivalence.
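The difference between ordinary and uniform equivalence sketched in this abstract can be made concrete with a small brute-force experiment. The sketch below is ours, not part of any cited system: the two toy programs P and Q are invented for illustration, and stable models are enumerated naively over all interpretations (real ASP solvers do this very differently).

```python
from itertools import chain, combinations

# A ground normal rule is (head, positive_body, negative_body); atoms are strings.
P = [("p", [], ["q"])]   # p :- not q.
Q = [("p", [], [])]      # p.

def atoms(prog):
    out = set()
    for h, pos, neg in prog:
        out |= {h, *pos, *neg}
    return out

def reduct(prog, candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body intersects the
    # candidate interpretation; strip the 'not' literals from the remaining rules.
    return [(h, pos) for h, pos, neg in prog if not (set(neg) & candidate)]

def least_model(pos_prog):
    # Least model of a negation-free program via naive fixpoint iteration.
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_prog:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model

def answer_sets(prog):
    # A candidate is a stable model iff it equals the least model of its reduct.
    universe = sorted(atoms(prog))
    candidates = chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))
    return [set(c) for c in candidates
            if least_model(reduct(prog, set(c))) == set(c)]

# Ordinary equivalence: P and Q have the same single answer set {p} ...
assert answer_sets(P) == answer_sets(Q) == [{"p"}]
# ... but they are not uniformly equivalent: adding the fact q distinguishes them.
fact_q = [("q", [], [])]
assert answer_sets(P + fact_q) != answer_sets(Q + fact_q)
```

Here the context R is restricted to a set of facts, which is exactly the uniform-equivalence case mentioned in the abstract.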
Logic Programming for Knowledge Representation
, 2007
Cited by 7 (0 self)
This note provides background information and references for the tutorial on recent research developments in logic programming inspired by the needs of knowledge representation.
A Solver for QBFs in Negation Normal Form
Cited by 6 (1 self)
Various problems in artificial intelligence can be solved by translating them into a quantified Boolean formula (QBF) and evaluating the resulting encoding. In this approach, a QBF solver is used as a black box in a rapid implementation of a more general reasoning system. Most of the current solvers for QBFs require formulas in prenex conjunctive normal form as input, which makes a further translation necessary, since the encodings are usually not in a specific normal form. This additional step increases the number of variables in the formula or disrupts the formula’s structure. Moreover, the most important part of this transformation, prenexing, is not deterministic. In this paper, we focus on an alternative way to process QBFs without these drawbacks and describe a solver, qpro, which is able to handle arbitrary formulas. To this end, we extend algorithms for QBFs to the non-normal form case and compare qpro with the leading normal-form provers on several problems from the area of artificial intelligence. We prove properties of the algorithms generalised to non-clausal form by using a novel approach based on a sequent-style formulation of the calculus.
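The idea of evaluating a QBF directly on its formula tree, without prenexing or CNF conversion, can be illustrated with a minimal recursive evaluator. This is our own toy sketch (exponential, no pruning) and bears no relation to qpro's actual algorithms; the tuple-based formula encoding is invented for the example.

```python
# Formulas as nested tuples: ('var', name), ('not', f), ('and', f, g),
# ('or', f, g), ('forall', name, f), ('exists', name, f).

def evaluate(f, env):
    """Recursively evaluate a closed QBF under a partial assignment env."""
    tag = f[0]
    if tag == 'var':
        return env[f[1]]
    if tag == 'not':
        return not evaluate(f[1], env)
    if tag == 'and':
        return evaluate(f[1], env) and evaluate(f[2], env)
    if tag == 'or':
        return evaluate(f[1], env) or evaluate(f[2], env)
    if tag == 'forall':
        # Branch on both truth values; all branches must succeed.
        return all(evaluate(f[2], {**env, f[1]: b}) for b in (False, True))
    if tag == 'exists':
        # Branch on both truth values; one succeeding branch suffices.
        return any(evaluate(f[2], {**env, f[1]: b}) for b in (False, True))
    raise ValueError(f"unknown connective: {tag}")

# forall x exists y. (x <-> y), kept in its natural non-clausal structure:
phi = ('forall', 'x', ('exists', 'y',
       ('or', ('and', ('var', 'x'), ('var', 'y')),
              ('and', ('not', ('var', 'x')), ('not', ('var', 'y'))))))
assert evaluate(phi, {}) is True
```

Note that quantifiers are handled wherever they occur in the tree, which is precisely what prenexing would otherwise have to normalise away.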
Belief revision of logic programs under answer set semantics
 In: KR’08, AAAI
, 2008
Cited by 6 (2 self)
We address the problem of belief revision in (nonmonotonic) logic programming under answer set semantics: given logic programs P and Q, the goal is to determine a program R that corresponds to the revision of P by Q, denoted P ∗ Q. Unlike previous approaches in logic programming, our formal techniques are analogous to those of distance-based belief revision in propositional logic. In developing our results, we build upon the model theory of logic programs furnished by SE models. Since SE models provide a formal, monotonic characterisation of logic programs, we can adapt well-known techniques from the area of belief revision to revision in logic programs. We investigate two specific operators: (logic program) expansion and a revision operator based on the distance between the SE models of logic programs. It proves to be the case that expansion is an interesting operator in its own right, unlike in classical AGM-style belief revision, where it is relatively uninteresting. Expansion and revision are shown to satisfy a suite of interesting properties; in particular, our revision operators satisfy the majority of the AGM postulates for revision. A complexity analysis reveals that our revision operators do not increase the complexity of the base formalism. As a consequence, we present an encoding for computing the revision of a logic program by another, within the same logic programming framework.
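The propositional analogue that this abstract builds on, distance-based (Dalal-style) revision, is easy to sketch on sets of classical models. The sketch below is only that analogue, not the paper's SE-model construction; the model sets for P and Q are invented for illustration, with models represented as sets of true atoms.

```python
def hamming(m1, m2):
    # Number of atoms on which the two models disagree (symmetric difference).
    return len(m1 ^ m2)

def revise(models_P, models_Q):
    """Dalal-style revision: keep the models of Q at minimal distance from P."""
    if not models_P:
        return models_Q  # nothing to preserve; revision collapses to Q
    dist = {frozenset(mq): min(hamming(mq, mp) for mp in models_P)
            for mq in models_Q}
    best = min(dist.values())
    return [set(m) for m, d in dist.items() if d == best]

# P believes both p and q; the new information Q rules out q.
models_P = [{"p", "q"}]
models_Q = [set(), {"p"}]
# The revised belief keeps p, since {p} is closer to {p, q} than {} is.
assert revise(models_P, models_Q) == [{"p"}]
```

The paper's revision operator applies this kind of minimal-distance selection to SE models rather than classical models, which is what makes the adaptation to logic programs possible.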
Hyperequivalence of logic programs with respect to supported models
 PROCEEDINGS OF AAAI 2008
, 2008
Cited by 6 (5 self)
Recent research in nonmonotonic logic programming has focused on certain types of program equivalence, which we refer to here as hyperequivalence, that are relevant for program optimization and modular programming. So far, most results concern hyperequivalence relative to the stable-model semantics. However, other semantics for logic programs are also of interest, especially the semantics of supported models which, when properly generalized, is closely related to the autoepistemic logic of Moore. In this paper, we consider a family of hyperequivalence relations for programs based on the semantics of supported and supported minimal models. We characterize these relations in model-theoretic terms. We use the characterizations to derive complexity results concerning testing whether two programs are hyperequivalent relative to supported and supported minimal models.
Equivalences in answer-set programming by countermodels in the logic of here-and-there
 of Lecture Notes in Computer Science
, 2008
Cited by 5 (3 self)
Different notions of equivalence, such as the prominent notions of strong and uniform equivalence, have been studied in Answer-Set Programming, mainly for the purpose of identifying programs that can serve as substitutes without altering the semantics, for instance in program optimization. Such semantic comparisons are usually characterized by various selections of models in the logic of Here-and-There (HT). For uniform equivalence, however, correct characterizations in terms of HT-models can only be obtained for finite theories, respectively programs. In this article, we show that a selection of countermodels in HT captures uniform equivalence also for infinite theories. This result is turned into coherent characterizations of the different notions of equivalence by countermodels, as well as by a mixture of HT-models and countermodels (so-called equivalence interpretations). Moreover, we generalize the notion of so-called relativized hyperequivalence for programs to propositional theories, and apply the same methodology in order to obtain a semantic characterization which is amenable to infinite settings. This allows for a lifting of the results to first-order theories under a very general semantics given in terms of a quantified version of HT. We thus obtain a general framework for the study of various notions of equivalence for theories under answer-set semantics. Moreover, we prove an expedient property that allows for a simplified …
Elimination of Disjunction and Negation in Answer-Set Programs under Hyperequivalence
Cited by 3 (3 self)
The study of different notions of equivalence is one of the cornerstones of current research in answer-set programming. This is mainly motivated by the needs of program simplification and modular programming, for which ordinary equivalence is insufficient. A recently introduced equivalence notion in this context is hyperequivalence, which includes as special cases strong, uniform, and ordinary equivalence. We study in this paper the question of replacing programs by syntactically simpler ones preserving hyperequivalence (we refer to such a replacement as a casting). In particular, we provide necessary and sufficient semantic conditions under which the elimination of disjunction, negation, or both, in programs is possible while preserving hyperequivalence. In other words, we characterise in model-theoretic terms when a disjunctive logic program can be replaced by a hyperequivalent normal, positive, or Horn program, respectively. Furthermore, we study the computational complexity of the considered tasks and, based on similar results for strong equivalence developed in previous work, we provide methods for constructing the respective hyperequivalent programs. Our results contribute to the understanding of problem settings in logic programming in the sense that they show in which scenarios the usage of certain constructs is superfluous.