Results 11–20 of 31
The CORAL Deductive System
 THE VLDB JOURNAL
, 1994
Abstract

Cited by 27 (2 self)
CORAL is a deductive system which supports a rich declarative language, and an interface to C++ which allows for a combination of declarative and imperative programming. The declarative query language supports general Horn clauses augmented with complex terms, set-grouping, aggregation, negation, and relations with tuples that contain (universally quantified) variables. A CORAL declarative program can be organized as a collection of interacting modules. The CORAL implementation supports a wide range of evaluation strategies, and automatically chooses an efficient evaluation strategy for each module in the program. In addition, users are permitted to guide query optimization, if desired, by selecting from among a wide range of control choices at the level of each module. The CORAL system provides imperative constructs such as update, insert and delete rules. CORAL also has an interface with C++, and users can program in a combination of declarative CORAL and C++ extended with ...
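The declarative core described above can be illustrated with a toy bottom-up evaluator for Horn-clause rules. This is a hypothetical Python sketch, far simpler than CORAL's actual engine; the predicate and variable names (parent, ancestor, X, Y, Z) are invented for illustration.

```python
# Toy bottom-up evaluation of Horn-clause rules, Datalog-style.
# A rule is a (head, body) pair of atoms; an atom is (predicate, args).
# Variables are strings starting with an uppercase letter.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(atom, fact, env):
    """Unify a body atom against a ground fact, extending env or failing."""
    pred, args = atom
    fpred, fargs = fact
    if pred != fpred or len(args) != len(fargs):
        return None
    env = dict(env)
    for a, f in zip(args, fargs):
        if is_var(a):
            if a in env and env[a] != f:
                return None
            env[a] = f
        elif a != f:
            return None
    return env

def evaluate(facts, rules):
    """Apply all rules bottom-up until no new facts appear (naive fixpoint)."""
    db = set(facts)
    while True:
        new = set()
        for head, body in rules:
            envs = [{}]
            for atom in body:
                envs = [e2 for e in envs for f in db
                        if (e2 := match(atom, f, e)) is not None]
            for env in envs:
                hpred, hargs = head
                fact = (hpred, tuple(env.get(a, a) for a in hargs))
                if fact not in db:
                    new.add(fact)
        if not new:
            return db
        db |= new

# ancestor(X,Y) :- parent(X,Y).
# ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
facts = [("parent", ("a", "b")), ("parent", ("b", "c"))]
rules = [
    (("ancestor", ("X", "Y")), [("parent", ("X", "Y"))]),
    (("ancestor", ("X", "Z")), [("parent", ("X", "Y")), ("ancestor", ("Y", "Z"))]),
]
result = evaluate(facts, rules)
```

A real system like CORAL layers module-level evaluation strategies, indexing, and C++ interoperability on top of this basic scheme.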
Déjà vu in fixpoints of logic programs
 in Proceedings of the North American Conference on Logic Programming
, 1989
Abstract

Cited by 26 (5 self)
We investigate properties of logic programs that permit refinements in their fixpoint evaluation and shed light on the choice of control strategy. A fundamental aspect of a bottom-up computation is that we must constantly check to see if the fixpoint has been reached. If the computation iteratively applies all rules, bottom-up, until the fixpoint is reached, this amounts to checking if any new facts were produced after each iteration. Such a check also enhances efficiency in that duplicate facts need not be reused in subsequent iterations, if we use the Semi-naive fixpoint evaluation strategy. However, the cost of this check is a significant component of the cost of bottom-up fixpoint evaluation, and for many programs the full check is unnecessary. We identify properties of programs that enable us to infer that a much simpler check (namely, whether any fact was produced in the previous iteration) suffices. While it is in general undecidable whether a given program has these properties, we develop techniques to test sufficient conditions, and we illustrate these techniques on some simple programs that have these properties. The significance of our results lies in the significantly larger class of programs for which bottom-up evaluation methods, enhanced with the optimizations that we propose, become competitive with standard (top-down) implementations of logic programs. This increased efficiency is achieved without compromising the completeness of the bottom-up approach; this is in contrast to the incompleteness that accompanies the depth-first search strategy that is central to most top-down implementations.
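The Semi-naive strategy and the simple "was anything new produced?" termination check discussed above can be sketched for the special case of transitive closure. This is an assumed illustration in Python, not the paper's formulation:

```python
# Semi-naive fixpoint evaluation of transitive closure: only the "delta"
# facts derived in the previous iteration are joined against the base
# relation, so duplicate facts are never re-derived, and termination is
# detected by the simple check "did the last iteration produce anything?"

def transitive_closure(edges):
    edges = set(edges)
    closure = set(edges)
    delta = set(edges)          # facts new in the previous iteration
    while delta:                # the simple termination check
        delta = {(x, z) for (x, y) in delta
                        for (y2, z) in edges if y == y2} - closure
        closure |= delta
    return closure

paths = transitive_closure({(1, 2), (2, 3), (3, 4)})
```

For a chain of three edges this derives the six reachable pairs, each exactly once.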
Cost-Based Optimization for Magic: Algebra and Implementation
 In Proc. of ACM SIGMOD
, 1996
Abstract

Cited by 24 (1 self)
Magic sets rewriting is a well-known optimization heuristic for complex decision-support queries. There can be many variants of this rewriting even for a single query, which differ greatly in execution performance. We propose cost-based techniques for selecting an efficient variant from the many choices. Our first contribution is a practical scheme that models magic sets rewriting as a special join method that can be added to any cost-based query optimizer. We derive cost formulas that allow an optimizer to choose the best variant of the rewriting and to decide whether it is beneficial. The order of complexity of the optimization process is preserved by limiting the search space in a reasonable manner. We have implemented this technique in IBM's DB2 C/S V2 database system. Our performance measurements demonstrate that the cost-based magic optimization technique performs well, and that without it, several poor decisions could be made. Our second contribution is a formal algebraic model of ...
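As a rough intuition for what magic sets rewriting buys (a toy sketch, not the paper's cost-based algebra): for a query such as ancestor(a, ?), a "magic" set of bindings reachable from the query constant restricts bottom-up evaluation to relevant tuples. The relation and constant names here are invented for illustration.

```python
# Toy magic-sets-style restriction of a bottom-up ancestor computation.
# Phase 1 computes the "magic" set of first-argument bindings reachable
# from the query constant; phase 2 evaluates ancestor bottom-up, but only
# for tuples whose first argument is in the magic set.

def magic_ancestors(parent, start):
    # Phase 1: magic set = bindings reachable from the query constant.
    magic = {start}
    frontier = {start}
    while frontier:
        frontier = {y for (x, y) in parent if x in frontier} - magic
        magic |= frontier
    # Phase 2: restricted bottom-up evaluation of
    # ancestor(X,Y) :- parent(X,Y).  ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
    anc = {(x, y) for (x, y) in parent if x in magic}
    changed = True
    while changed:
        new = {(x, z) for (x, y) in parent if x in magic
                      for (y2, z) in anc if y == y2} - anc
        anc |= new
        changed = bool(new)
    return {y for (x, y) in anc if x == start}

parent = {("a", "b"), ("b", "c"), ("d", "e")}
result = magic_ancestors(parent, "a")
```

The tuple ("d", "e") is never touched, since "d" is unreachable from the query constant; choosing among rewriting variants like this one, based on estimated cost, is what the paper addresses.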
Propagating Constraints in Recursive Deductive Databases
 In Proceedings of the First North American Conference on Logic Programming
, 1989
Abstract

Cited by 23 (6 self)
In traditional database systems, as in deductive databases that do not contain recursive rules, the efficient retrieval of tuples satisfying a constraint such as "retrieve all people who earn more than 30,000 dollars" is crucial to the performance of the database system. We investigate an algorithm which permits the "early use" of constraints to guide a bottom-up computation of recursive rules. The algorithm is an adaptation of the well-known fold/unfold transformations and is designed to work in conjunction with the magic set approach. A contribution of the algorithm is that it generalises the ability of magic sets to transform a program, traditionally based on argument bindings within a rule or query, to the case of arguments in a rule or query which are constrained, but not necessarily bound. In addition, it permits the implementation of a bidirectional sip (sideways information passing strategy).
1 Introduction
The field of Deductive Databases [8] is concerned with developing...
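The "early use" of a constraint can be illustrated with a monotone analog of the salary query: retrieving paths whose total cost stays below a limit. Because cost only grows along a derivation, the constraint can prune partial results inside the fixpoint loop instead of filtering at the end. This is an assumed Python illustration, not the paper's fold/unfold algorithm; the edge data and LIMIT are invented.

```python
# Pushing a constraint (total cost < LIMIT) into a bottom-up recursive
# computation: partial paths that already violate the constraint are
# discarded immediately, so they never feed further derivations.

LIMIT = 10

def cheap_paths(edges):
    """edges: set of (src, dst, cost). Returns (src, dst, cost) with cost < LIMIT."""
    paths = {e for e in edges if e[2] < LIMIT}    # constraint on base facts
    changed = True
    while changed:
        new = {(a, d, c1 + c2)
               for (a, b, c1) in paths
               for (b2, d, c2) in edges
               if b == b2 and c1 + c2 < LIMIT}    # early pruning, inside the loop
        new -= paths
        paths |= new
        changed = bool(new)
    return paths

result = cheap_paths({("a", "b", 4), ("b", "c", 4), ("c", "d", 4)})
```

The path a-to-d (cost 12) is never materialized, because its prefix already exceeds the limit when extended.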
A survey of parallel execution strategies for transitive closure and logic programs
 DISTRIBUTED AND PARALLEL DATABASES
, 1993
Abstract

Cited by 20 (5 self)
An important feature of database technology of the nineties is the use of parallelism for speeding up the execution of complex queries. This technology is being tested in several experimental database architectures and a few commercial systems for conventional select-project-join queries. In particular, hash-based fragmentation is used to distribute data to disks under the control of different processors in order to perform selections and joins in parallel. With the development of new query languages, and in particular with the definition of transitive closure queries and of more general logic programming queries, the new dimension of recursion has been added to query processing. Recursive queries are complex; at the same time, their regular structure is particularly suited for parallel execution, and parallelism may give a high efficiency gain. We survey the approaches to parallel execution of recursive queries that have been presented in the recent literature. We observe that research on parallel execution of recursive queries is separated into two distinct subareas, one focused on the transitive closure of Relational Algebra expressions, the other one focused on optimization of more general Datalog queries. Though the subareas seem radically different because of the approach and formalism used, they have many common features. This is not surprising, because most typical Datalog queries can be solved by means of the transitive closure of simple
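The hash-based fragmentation mentioned above can be sketched in a few lines: tuples are routed to fragments by hashing the join attribute, so matching tuples always land in the same fragment and each fragment can be joined independently, in principle on a different processor. This is a rough assumed sketch, not any system described in the survey.

```python
# Hash-based fragmentation for a parallel equi-join r.1 = s.0.
# Each (r_fragment, s_fragment) pair is independent work that could be
# assigned to a separate processor.

NUM_FRAGMENTS = 4

def fragment(relation, key_index):
    """Partition a relation by hashing one attribute."""
    frags = [set() for _ in range(NUM_FRAGMENTS)]
    for tup in relation:
        frags[hash(tup[key_index]) % NUM_FRAGMENTS].add(tup)
    return frags

def parallel_join(r, s):
    r_frags = fragment(r, 1)   # hash r on its second attribute
    s_frags = fragment(s, 0)   # hash s on its first attribute
    out = set()
    for rf, sf in zip(r_frags, s_frags):   # each pair is independent
        out |= {(a, b, c) for (a, b) in rf for (b2, c) in sf if b == b2}
    return out

result = parallel_join({(1, 2), (3, 4)}, {(2, 5), (4, 6), (7, 8)})
```

For recursive queries the same idea is iterated: each round's delta is re-fragmented on the join attribute before the next parallel join.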
Top-Down vs. Bottom-Up Revisited
 In Proceedings of the International Logic Programming Symposium
, 1991
Abstract

Cited by 18 (6 self)
Ullman ([Ull89a, Ull89b]) has shown that for the evaluation of safe Datalog programs, bottom-up evaluation using Magic Sets optimization has time complexity less than or equal to a particular top-down strategy, Queue-based Rule Goal Tree (QRGT) evaluation. This result has sometimes been incorrectly interpreted to mean that bottom-up evaluation beats top-down evaluation for evaluating Datalog programs; top-down strategies such as Prolog (which does no memoing, and uses last call optimization) can beat both QRGT and bottom-up evaluation on some Datalog programs. In this paper we compare a Prolog evaluation based on the WAM model (using last call optimization) with a bottom-up execution based on Magic Templates with Tail Recursion optimization ([Ros91]), and show the following: (1) Bottom-up evaluation makes no more inferences than Prolog for range-restricted programs. (2) For a restricted class of programs (which properly includes safe Datalog) the cost of bottom-up evaluation is never ...
Regular Approximations of Logic Programs and Their Uses
, 1992
Abstract

Cited by 9 (5 self)
Regular approximations of logic programs have a variety of uses, including static analysis for debugging, program specialisation, and machine learning. An algorithm for computing a regular approximation of a normal program is given, and some applications are discussed. The analysis of a “magic set” style of transformation of a program P can be used to derive more precise approximations than can be obtained from P itself. The approximation algorithm given here can also be applied to Prolog programs.
Analyzing Logic Programs Using "Prop"ositional Logic Programs and a Magic Wand
 The Journal of Logic Programming
, 1997
Abstract

Cited by 9 (0 self)
This paper illustrates the role of a class of "prop"ositional logic programs in the analysis of complex properties of logic programs. Analyses are performed by abstracting Prolog programs to corresponding "prop"ositional logic programs which approximate the original programs and have finite meanings. We focus on a groundness analysis which is equivalent to that obtained by abstract interpretation using the domain Prop. The main contribution is the ease with which a highly efficient implementation of the analysis is obtained. The implementation is bottom-up and provides approximations of a program's success patterns. Goal-dependent information such as call patterns is obtained using a magic-set transformation. A novel compositional approach is applied so that call patterns for arbitrary goals are derived in a precise and efficient way.
1 INTRODUCTION
Groundness analysis is one of the more important analyses for logic programs. The knowledge that a given program variable wil...
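The flavor of such an abstraction can be shown on append/3: each success of the predicate is abstracted to a tuple of groundness bits, and the abstract ("prop"ositional) program is run bottom-up to a finite fixpoint. This is a hedged toy version, assumed for illustration; it is not the paper's Prop implementation.

```python
# Toy groundness analysis of append/3 by abstraction to booleans.
# Concrete clauses:
#   append([], Y, Y).
#   append([H|T], Y, [H|T2]) :- append(T, Y, T2).
# Abstractly: in the base clause the first argument is ground and the
# second and third share groundness; in the recursive clause the list
# arguments are ground iff both the head H and the tail are ground.

def groundness_append():
    desc = set()            # set of possible (gx, gy, gz) success patterns
    changed = True
    while changed:
        new = set()
        for y in (False, True):          # base clause
            new.add((True, y, y))
        for (t, y, t2) in desc:          # recursive clause, for each head bit
            for h in (False, True):
                new.add((h and t, y, h and t2))
        changed = not (new <= desc)
        desc |= new
    return desc

patterns = groundness_append()
```

The fixpoint contains exactly the models of the Prop formula z &lt;-&gt; (x &amp; y): the third argument of append is ground precisely when the first two are.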
Analysis and Transformation of Proof Procedures
, 1994
Abstract

Cited by 8 (2 self)
Automated theorem proving has made great progress during the last few decades. Proofs of more and more difficult theorems are being found faster and faster. However, the exponential increase in the size of the search space remains for many theorem proving problems. Logic program analysis and transformation techniques have also made progress during the last few years and automated theorem proving can benefit from these techniques if they can be made applicable to general theorem proving problems. In this thesis we investigate the applicability of logic program analysis and transformation techniques to automated theorem proving. Our aim is to speed up theorem provers by avoiding useless search. This is done by detecting and deleting parts of the theorem prover and theory under consideration that are not needed for proving a given formula. The analysis and transformation techniques developed for logic programs can be applied in automated theorem proving via a programming technique called ...
Parallelism in Logic Programs
 In Proceedings of the Seventeenth Annual ACM Symposium on Principles of Programming Languages
, 1990
Abstract

Cited by 7 (1 self)
There is a tension between the objectives of avoiding irrelevant computation and extracting parallelism, in that a computational step used to restrict another must precede the latter. Our thesis, following [BeR87], is that evaluation methods can be viewed as implementing a choice of sideways information propagation graphs, or sips, which determines the set of goals and facts that must be evaluated. Two evaluation methods that implement the same sips can then be compared to see which obtains a greater degree of parallelism, and we provide a formal measure of parallelism to make this comparison. Using this measure, we prove that transforming a program using the Magic Templates algorithm and then evaluating the fixpoint bottomup provides a "most parallel" implementation for a given choice of sips, without taking resource constraints into account. This result, taken in conjunction with earlier results from [BeR87, Ra88], which show that bottomup evaluation performs no irrelevant computat...