Results 11–20 of 122
ON REPRESENTATIONAL ISSUES ABOUT COMBINATIONS OF CLASSICAL THEORIES WITH NONMONOTONIC RULES
, 2006
"... In the context of current efforts around SemanticWeb languages, the combination of classical theories in classical firstorder logic (and in particular of ontologies in various description logics) with rule languages rooted in logic programming is receiving considerable attention. Existing approach ..."
Abstract

Cited by 22 (13 self)
 Add to MetaCart
In the context of current efforts around Semantic Web languages, the combination of classical theories in classical first-order logic (and in particular of ontologies in various description logics) with rule languages rooted in logic programming is receiving considerable attention. Existing approaches such as SWRL, dl-programs, and DL+log differ significantly in the way ontologies interact with (nonmonotonic) rule bases. In this paper, we identify fundamental representational issues which need to be addressed by such combinations and formulate a number of formal principles which help to characterize and classify existing and possible future approaches to the combination of rules and classical theories. We use the formal principles to explicate the underlying assumptions of current approaches. Finally, we propose a number of settings, based on our analysis of the representational issues and the fundamental principles underlying current approaches.
Learning Logical Exceptions In Chess
, 1994
"... This thesis is about inductive learning, or learning from examples. The goal has been to investigate ways of improving learning algorithms. The chess endgame "King and Rook against King" (KRK) was chosen, and a number of benchmark learning tasks were defined within this domain, sufficient ..."
Abstract

Cited by 18 (2 self)
 Add to MetaCart
This thesis is about inductive learning, or learning from examples. The goal has been to investigate ways of improving learning algorithms. The chess endgame "King and Rook against King" (KRK) was chosen, and a number of benchmark learning tasks were defined within this domain, sufficient to over-challenge state-of-the-art learning algorithms. The tasks comprised learning rules to distinguish (1) illegal positions and (2) legal positions won optimally in a fixed number of moves. From our experimental results with task (1), the best-performing algorithm was selected and a number of improvements were made. The principal extension to this generalisation method was to alter its representation from classical logic to a nonmonotonic formalism. A novel algorithm was developed in this framework to implement rule specialisation, relying on the invention of new predicates. When experimentally tested, this combined approach did not at first deliver the expected performance gains due to restrictio...
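Task (1) has a compact procedural ground truth against which learned rules can be compared. The following is an illustrative sketch only (not the thesis's learner or its exact dataset definition), assuming White to move, 0-indexed (file, rank) tuples, and the usual convention that a position is illegal if pieces coincide, the kings are adjacent, or the black king is already in check:

```python
def adjacent(a, b):
    """Two distinct squares within king-move distance (Chebyshev <= 1)."""
    return a != b and max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= 1

def rook_attacks(wr, wk, bk):
    """The rook attacks the black king along a rank or file,
    unless the white king stands on the line between them."""
    if wr[0] == bk[0]:  # same file
        lo, hi = sorted((wr[1], bk[1]))
        return not (wk[0] == wr[0] and lo < wk[1] < hi)
    if wr[1] == bk[1]:  # same rank
        lo, hi = sorted((wr[0], bk[0]))
        return not (wk[1] == wr[1] and lo < wk[0] < hi)
    return False

def illegal(wk, wr, bk):
    """With White to move: illegal if pieces coincide, kings touch,
    or the black king is already in check."""
    if len({wk, wr, bk}) < 3:
        return True
    if adjacent(wk, bk):
        return True
    return rook_attacks(wr, wk, bk)
```

Rules learned for the benchmark can then be scored against this oracle over all 64^3 piece placements.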
Further improvement on integrity constraint checking for stratifiable deductive databases
 In Proceedings of the 22nd VLDB Conference
, 1996
"... Integrity constraint checking for stratifiable deductive databases has been studied by many authors. However, most of these methods may perform unnecessary checking if the update is irrelevant to the constraints. [Lee941 proposed a set called relevant set which can be incorporated in these works to ..."
Abstract

Cited by 17 (0 self)
 Add to MetaCart
Integrity constraint checking for stratifiable deductive databases has been studied by many authors. However, most of these methods may perform unnecessary checking if the update is irrelevant to the constraints. [Lee94] proposed a set, called the relevant set, which can be incorporated into these works to reduce unnecessary checking. [Lee94] adopts a top-down approach and makes use of constants and evaluable functions in the constraints and deductive rules to reduce the search space. In this paper, we further extend this idea to make use of relational predicates, instead of only the constants and evaluable functions in [Lee94]. We first show that this extension is not a trivial one, as extra database retrieval cost is incurred. We then present a new method to construct a pre-test which can be incorporated into most existing methods to reduce the average checking costs, in terms of database accesses, by a significant factor. Our method also differs from other partial checking methods in that we can handle multiple updates.
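The idea of a relevance pre-test can be shown in miniature: an update can only violate a constraint if it unifies with some literal occurring in it, so incompatible constants let checking be skipped entirely. The sketch below is hypothetical (the predicate names, the uppercase-variable convention, and the `relevant` helper are illustrative; the paper's construction additionally exploits relational predicates and evaluable functions):

```python
def compatible(pattern, fact_args):
    """A constraint literal's argument pattern matches a ground update
    if every constant position agrees; variables (identifiers starting
    with an uppercase letter) match anything."""
    return all(p[0].isupper() or p == a
               for p, a in zip(pattern, fact_args))

def relevant(update, constraint_literals):
    """Pre-test: the ground update (pred, args) can only violate the
    constraint if it unifies with some literal occurring in it."""
    pred, args = update
    return any(p == pred and len(pat) == len(args) and compatible(pat, args)
               for p, pat in constraint_literals)

# Hypothetical constraint: <- employee(X, sales), salary(X, S), S > 100000
constraint = [("employee", ("X", "sales")), ("salary", ("X", "S"))]
```

An update such as `employee(ann, hr)` fails the pre-test, so the full constraint evaluation (and its database accesses) can be skipped.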
Linear tabulated resolution based on Prolog control strategy
, 2000
"... Infinite loops and redundant computations are long recognized open problems in Prolog. Two ways have been explored to resolve these problems: loop checking and tabling. Loop checking can cut infinite loops, but it cannot be both sound and complete even for functionfree logic programs. Tabling seems ..."
Abstract

Cited by 17 (10 self)
 Add to MetaCart
Infinite loops and redundant computations are long-recognized open problems in Prolog. Two ways have been explored to resolve these problems: loop checking and tabling. Loop checking can cut infinite loops, but it cannot be both sound and complete even for function-free logic programs. Tabling seems to be an effective way to resolve infinite loops and redundant computations. However, existing tabulated resolutions, such as OLDT-resolution, SLG-resolution, and Tabulated SLS-resolution, are non-linear because they rely on the solution-lookup mode in formulating tabling. The principal disadvantage of non-linear resolutions is that they cannot be implemented using a simple stack-based memory structure like that in Prolog. Moreover, some strictly sequential operators such as cuts may not be handled as easily as in Prolog. In this paper, we propose a hybrid method to resolve infinite loops and redundant computations. We combine the ideas of loop checking and tabling to establish a linear tabulated resolution called TP-resolution. TP-resolution has two distinctive features: (1) it makes linear tabulated derivations in the same way as Prolog, except that infinite loops are broken and redundant computations are reduced, and it handles cuts as effectively as Prolog; (2) it is sound and complete for positive logic programs with the bounded-term-size property. The underlying algorithm can be implemented by an extension to any existing Prolog abstract machine, such as the WAM or ATOAM.
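The interplay of tabling and loop checking can be illustrated with a deliberately simplified sketch: a memoising reachability evaluator that breaks loops with an in-progress set. This is not TP-resolution itself (in particular it does not model cuts or the full completion of looping subgoals), only the flavour of linear, stack-like evaluation with answer tables:

```python
# Hypothetical cyclic program: reach(X,Y) <- edge(X,Y); reach(X,Z) <- edge(X,Y), reach(Y,Z).
edges = {"a": ["b"], "b": ["c", "a"], "c": []}

def reach(x, table=None, in_progress=None):
    """Tabled evaluation of reach(x): the table memoises answers,
    while the in_progress set breaks loops such as a -> b -> a,
    which would send a plain Prolog-style interpreter into
    infinite recursion.  Each top-level query starts fresh tables."""
    table = {} if table is None else table
    in_progress = set() if in_progress is None else in_progress
    if x in table:
        return table[x]
    if x in in_progress:      # loop detected: contribute no new answers
        return set()
    in_progress.add(x)
    result = set()
    for y in edges.get(x, []):
        result.add(y)
        result |= reach(y, table, in_progress)
    in_progress.discard(x)
    table[x] = result
    return result
```

A full tabling engine must additionally re-complete tables for subgoals inside a loop; this sketch sidesteps that by not reusing tables across top-level queries.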
Complexity of Nonrecursive Logic Programs with Complex Values
 In Proceedings of the 17th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (PODS’98)
, 1998
"... We investigate complexity of the SUCCESS problem for logic query languages with complex values: check whether a query defines a nonempty set. The SUCCESS problem for recursive query languages with complex values is undecidable, so we study the complexity of nonrecursive queries. By complex values we ..."
Abstract

Cited by 16 (2 self)
 Add to MetaCart
(Show Context)
We investigate the complexity of the SUCCESS problem for logic query languages with complex values: check whether a query defines a nonempty set. The SUCCESS problem for recursive query languages with complex values is undecidable, so we study the complexity of nonrecursive queries. By complex values we understand values such as trees, finite sets, and multisets. Due to the well-known correspondence between relational query languages and datalog, our results can be considered as results about relational query languages with complex values. The paper gives a complete complexity classification of the SUCCESS problem for nonrecursive logic programs over trees, depending on the underlying signature, the presence of negation, and range-restrictedness. We also prove several results about finite sets and multisets.
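For the nonrecursive case, SUCCESS is decidable by a single bottom-up pass: evaluate each rule once (in dependency order; no fixpoint is needed without recursion) and test whether the query predicate derives any fact. A toy sketch over flat relational values, not the complex-value setting the paper classifies (variables are assumed uppercase, constants lowercase):

```python
facts = {("edge", ("a", "b")), ("edge", ("b", "c"))}

# Hypothetical nonrecursive program: path2(X, Z) <- edge(X, Y), edge(Y, Z).
rules = [("path2", ("X", "Z"), [("edge", ("X", "Y")), ("edge", ("Y", "Z"))])]

def succeeds(query_pred, facts, rules):
    """SUCCESS check for a nonrecursive program: evaluate each rule
    once, bottom-up, then test whether the query predicate derived
    any fact.  Joins are computed by naive substitution enumeration."""
    derived = set(facts)
    for head, head_args, body in rules:
        substs = [{}]
        for pred, args in body:
            new = []
            for s in substs:
                for fp, fargs in derived:
                    if fp != pred or len(fargs) != len(args):
                        continue
                    s2 = dict(s)
                    if all(s2.setdefault(v, c) == c
                           for v, c in zip(args, fargs)):
                        new.append(s2)
            substs = new
        for s in substs:
            derived.add((head, tuple(s[v] for v in head_args)))
    return any(p == query_pred for p, _ in derived)
```

With trees, sets, or multisets as values, the same skeleton applies but the matching step becomes (signature-dependent) term unification, which is where the paper's complexity classification bites.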
Abstract domains based on regular types
 Proceedings of the 20th International Conference on Logic Programming, volume 3132 of LNCS
, 2004
"... Abstract. We show how to transform a set of regular type definitions into a finite preinterpretation for a logic program. The derived preinterpretation forms the basis for an abstract interpretation. The core of the transformation is a determinization procedure for nondeterministic finite tree aut ..."
Abstract

Cited by 15 (6 self)
 Add to MetaCart
(Show Context)
Abstract. We show how to transform a set of regular type definitions into a finite pre-interpretation for a logic program. The derived pre-interpretation forms the basis for an abstract interpretation. The core of the transformation is a determinization procedure for nondeterministic finite tree automata. This approach provides a flexible and practical way of building program-specific analysis domains. We argue that the constructed domains are condensing: thus goal-independent analysis over the constructed domains loses no precision compared to goal-dependent analysis. We also show how instantiation modes such as ground, variable and non-variable can be expressed as regular types and hence integrated with other regular types. We highlight applications in binding-time analysis for offline partial evaluation and infinite-state model checking. Experimental results and a discussion of complexity are included.
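The determinization step is the bottom-up subset construction for tree automata: the states of the deterministic automaton are reachable *sets* of original states, so every ground term denotes exactly one of them, which is what yields a finite pre-interpretation. A minimal sketch (the "list" type definition and its encoding are our own illustration, not the paper's implementation):

```python
from itertools import product

# Hypothetical NFTA for the regular type "list" over nil/0, cons/2 and a
# constant a: a transition maps (symbol, tuple of child states) to a set
# of possible result states.
delta = {
    ("nil", ()): {"list", "any"},
    ("cons", ("any", "list")): {"list", "any"},
    ("cons", ("any", "any")): {"any"},
    ("a", ()): {"any"},
}

def determinize(delta):
    """Bottom-up subset construction for a nondeterministic finite
    tree automaton: iterate until the set of reachable set-states
    is closed under every function symbol."""
    arities = {(f, len(kids)) for (f, kids) in delta}
    states = set()
    while True:
        new = set()
        for f, n in arities:
            for kids in product(states, repeat=n):
                target = frozenset(
                    q
                    for (g, pat), qs in delta.items()
                    if g == f and len(pat) == n
                    and all(p in k for p, k in zip(pat, kids))
                    for q in qs)
                if target:
                    new.add(target)
        if new <= states:
            return states
        states |= new
```

Here the construction collapses the type definitions to two abstract values, {any} ("non-list") and {list, any} ("list"), which can then serve as the carrier of a program-specific analysis domain.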
Decompilation: The Enumeration of Types and Grammars
 ACM Transactions on Programming Languages and Systems (TOPLAS)
, 1992
"... While a compiler produces object code from source code, a decompiler produces source code from object code, and has applications in the testing and validation of safetycritical software. Decompiling an object code provides an independent demonstration of correctness that is hard to better for indus ..."
Abstract

Cited by 14 (3 self)
 Add to MetaCart
(Show Context)
While a compiler produces object code from source code, a decompiler produces source code from object code, and has applications in the testing and validation of safety-critical software. Decompiling object code provides an independent demonstration of correctness that is hard to better for industrial purposes (the alternative is to prove the compiler correct). But although compiler compilers are in common use in the software industry, a decompiler compiler is much more unusual. It turns out that a data type specification representing a programming language grammar can be remolded into a functional program that enumerates all the abstract syntax trees. This observation is the springboard for a general method for compiling decompilers from the specifications of (non-optimizing) compilers. This paper deals with methods and theory, together with an application of the technique. The correctness of a decompiler generated from the specification of a simple occam-like compiler is demonstrated.
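The key observation, that a grammar can be remolded into a program enumerating all abstract syntax trees, can be sketched directly, here in Python rather than a functional language, for a hypothetical three-constructor grammar of our own choosing:

```python
from itertools import count, product

# Toy grammar as constructors with arities:  E ::= x | neg(E) | add(E, E)
grammar = {"x": 0, "neg": 1, "add": 2}

def compositions(n, k):
    """All ways to write n as an ordered sum of k positive integers."""
    if n < k:
        return
    if k == 1:
        yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def trees_of_size(n, _memo={}):
    """All abstract syntax trees with exactly n constructor nodes."""
    if n in _memo:
        return _memo[n]
    result = []
    for con, arity in grammar.items():
        if arity == 0:
            if n == 1:
                result.append((con,))
        else:
            # distribute the remaining n-1 nodes over the children
            for sizes in compositions(n - 1, arity):
                for kids in product(*(trees_of_size(k) for k in sizes)):
                    result.append((con,) + kids)
    _memo[n] = result
    return result

def enumerate_asts():
    """Enumerate every AST of the grammar, smallest first."""
    for n in count(1):
        yield from trees_of_size(n)

def first_asts(k):
    gen = enumerate_asts()
    return [next(gen) for _ in range(k)]
```

Enumerating by size guarantees every tree appears exactly once, which is the property a generated decompiler needs when searching for a source term that compiles to the given object code.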
DISLOG – A Disjunctive Deductive Database Prototype
 PROC. TWELFTH WORKSHOP ON LOGIC PROGRAMMING (WLP'97)
, 1997
"... DISLOG is a system for reasoning in disjunctive deductive databases. It seeks to combine features of disjunctive logic programming, such as the support for incomplete information, with those of deductive databases, such as allresult inference capabilities. Several basic operators are provided for ..."
Abstract

Cited by 14 (11 self)
 Add to MetaCart
DISLOG is a system for reasoning in disjunctive deductive databases. It seeks to combine features of disjunctive logic programming, such as the support for incomplete information, with those of deductive databases, such as all-result inference capabilities. Several basic operators are provided for logical and nonmonotonic reasoning: the logical consequence operator derives all logically implied disjunctive clauses from a disjunctive database, while the nonmonotonic operators are semantically founded on generalizations of the well-known closed-world assumption and the negation-as-failure concept. Reasoning in disjunctive deductive databases is very complex, even for small examples. Many different optimization techniques are integrated in DISLOG to speed up application performance. The clause tree is used as a data structure that allows for an efficient and transparent evaluation. The DISLOG system has been developed in PROLOG; currently a core part of DISLOG is being reimplemented ...
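The generalized closed-world assumption underlying such nonmonotonic operators can be illustrated by brute force: an atom may be assumed false exactly when it is false in every minimal model of the disjunctive database. A small sketch using exhaustive enumeration (nothing like DISLOG's optimized clause trees; the example database is our own):

```python
from itertools import chain, combinations

# A toy disjunctive database: each clause is a set of atoms, read as a
# disjunction, e.g. {"p", "q"} means "p or q".
clauses = [{"p", "q"}, {"r"}]
atoms = set().union(*clauses)

def models(clauses, atoms):
    """All Herbrand models: subsets of atoms satisfying every clause."""
    subsets = chain.from_iterable(
        combinations(sorted(atoms), k) for k in range(len(atoms) + 1))
    return [set(m) for m in subsets
            if all(c & set(m) for c in clauses)]

def minimal_models(clauses, atoms):
    """Models with no proper sub-model."""
    ms = models(clauses, atoms)
    return [m for m in ms if not any(n < m for n in ms)]

def gcwa_false(atom, clauses, atoms):
    """Generalized closed-world assumption: assume not-atom exactly
    when the atom holds in no minimal model."""
    return all(atom not in m for m in minimal_models(clauses, atoms))
```

Here neither `p` nor `q` may be assumed false (each holds in some minimal model), while `r` is a definite consequence; this exponential enumeration is precisely the cost that motivates DISLOG's optimizations.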
Robust and Scalable Linked Data Reasoning Incorporating Provenance and Trust Annotations
, 2011
"... In this paper, we leverage annotated logic programs for tracking indicators of provenance and trust during reasoning, specifically focussing on the usecase of applying a scalable subset of OWL 2 RL/RDF rules over static corpora of arbitrary Linked Data (Web data). Our annotations encode three facet ..."
Abstract

Cited by 12 (4 self)
 Add to MetaCart
In this paper, we leverage annotated logic programs for tracking indicators of provenance and trust during reasoning, specifically focussing on the use-case of applying a scalable subset of OWL 2 RL/RDF rules over static corpora of arbitrary Linked Data (Web data). Our annotations encode three facets of information: (i) blacklist: a (possibly manually generated) boolean annotation which indicates that the referent data are known to be harmful and should be ignored during reasoning; (ii) ranking: a numeric value derived by a PageRank-inspired technique, adapted for Linked Data, which determines the centrality of certain data artefacts (such as RDF documents and statements); (iii) authority: a boolean value which uses Linked Data principles to conservatively determine whether or not some terminological information can be trusted. We formalise a logical framework which annotates inferences with the strength of derivation along these dimensions of trust and provenance; we formally demonstrate some desirable properties of the deployment of annotated logic programming in our setting, which guarantees (i) a unique minimal model (least fixpoint); (ii) monotonicity; (iii) finitariness; and (iv) decidability. In so doing, we also give some formal results which reveal strategies for scalable and efficient implementation of various reasoning tasks one might consider. Thereafter, we discuss scalable and distributed implementation strategies for applying our ranking and reasoning methods over a cluster of commodity hardware; throughout, we provide evaluation of our methods over 1 billion Linked Data quadruples crawled from approximately 4 million individual Web documents, empirically demonstrating the scalability of our approach, and how our ...
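The annotated fixpoint construction can be sketched in miniature: discard blacklisted input, propagate the weakest annotation through each rule application, and iterate to the least fixpoint. The single transitivity rule, rank values, and encoding below are illustrative assumptions, not the paper's OWL 2 RL/RDF rule set:

```python
# Annotated facts: a rank in [0,1], or None to mark a blacklisted fact.
facts = {
    ("edge", "a", "b"): 0.8,
    ("edge", "b", "c"): 0.5,
    ("edge", "x", "a"): None,   # blacklisted: ignored during reasoning
}

def annotated_closure(facts):
    """Least fixpoint of the single rule
        path(X, Z) <- edge(X, Y), path(Y, Z)
    where a derived fact carries the minimum rank of its premises,
    and alternative derivations keep the maximum.  The operator is
    monotone over this annotation lattice, so iteration converges
    to a unique least fixpoint."""
    edges = {k[1:]: r for k, r in facts.items()
             if k[0] == "edge" and r is not None}
    path = dict(edges)          # base case: path(X,Y) <- edge(X,Y)
    changed = True
    while changed:
        changed = False
        for (x, y), r1 in edges.items():
            for (y2, z), r2 in list(path.items()):
                if y != y2:
                    continue
                r = min(r1, r2)
                if path.get((x, z), -1.0) < r:
                    path[(x, z)] = r
                    changed = True
    return path
```

The min/max scheme is one standard annotated-program semantics; it illustrates how derived triples inherit a defensible trust score while blacklisted input never contributes to the fixpoint.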