Results 1 - 4 of 4
From Datalog rules to efficient programs with time and space guarantees
 In PPDP ’03: Proceedings of the 5th ACM SIGPLAN International Conference on Principles and Practice of Declarative Programming, 2003
"... This paper describes a method for transforming any given set of Datalog rules into an efficient specialized implementation with guaranteed worstcase time and space complexities, and for computing the complexities from the rules. The running time is optimal in the sense that only useful combinations ..."
Abstract

Cited by 30 (12 self)
This paper describes a method for transforming any given set of Datalog rules into an efficient specialized implementation with guaranteed worst-case time and space complexities, and for computing the complexities from the rules. The running time is optimal in the sense that only useful combinations of facts that lead to all hypotheses of a rule being simultaneously true are considered, and each such combination is considered exactly once. The associated space usage is optimal in that it is the minimum space needed for such consideration, modulo scheduling optimizations that may eliminate some summands in the space usage formula. The transformation is based on a general method for algorithm design that exploits fixed-point computation, incremental maintenance of invariants, and combinations of indexed and linked data structures. We apply the method to a number of analysis problems, some with improved algorithm complexities and all with greatly improved algorithm understanding and greatly simplified complexity analysis.
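To make the flavor of such specialized evaluation concrete, here is a minimal Python sketch (ours, not the paper's generated code; all names are hypothetical) of bottom-up evaluation for the transitive-closure rules path(X,Y) :- edge(X,Y) and path(X,Z) :- edge(X,Y), path(Y,Z). A worklist plus an index on edge facts ensures each combination of an edge fact and a path fact is considered exactly once, which is the kind of guarantee the paper derives systematically.

```python
# A minimal sketch (not the paper's transformation) of bottom-up Datalog
# evaluation for transitive closure:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- edge(X, Y), path(Y, Z).
# A worklist plus an index on the first argument of edge ensures each
# (edge, path) combination of facts is considered exactly once.

from collections import defaultdict

def transitive_closure(edges):
    path = set(edges)             # derived facts, starting from the base rule
    pred = defaultdict(set)       # index: y -> {x | edge(x, y)}
    for x, y in edges:
        pred[y].add(x)
    work = list(path)             # facts not yet combined with edge facts
    while work:
        y, z = work.pop()
        for x in pred[y]:         # each useful combination met exactly once
            if (x, z) not in path:
                path.add((x, z))
                work.append((x, z))
    return path

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```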
Dynamic programming via static incrementalization
 In Proceedings of the 8th European Symposium on Programming, 1999
"... Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share subsubproblems. While a straightforward recursive program solves common subsubproblems repeatedly and often takes exponential time, a dyn ..."
Abstract

Cited by 26 (12 self)
Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share sub-subproblems. While a straightforward recursive program solves common sub-subproblems repeatedly and often takes exponential time, a dynamic programming algorithm solves every sub-subproblem just once, saves the result, reuses it when the sub-subproblem is encountered again, and takes polynomial time. This paper describes a systematic method for transforming programs written as straightforward recursions into programs that use dynamic programming. The method extends the original program to cache all possibly computed values, incrementalizes the extended program with respect to an input increment to use and maintain all cached results, prunes out cached results that are not used in the incremental computation, and uses the resulting incremental program to form an optimized new program. Incrementalization statically exploits semantics of both control structures and data structures and maintains as invariants equalities characterizing cached results. The principle underlying incrementalization is general for achieving drastic program speedups. Compared with previous methods that perform memoization or tabulation, the method based on incrementalization is more powerful and systematic. It has been implemented and applied to numerous problems and succeeded on all of them.
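The effect of such a transformation can be illustrated on the textbook Fibonacci recursion. The sketch below is hand-written (it is not output of the paper's automatic method, and the function names are ours): the incremental version maintains the cached pair of the two most recent results as an invariant and updates it under the input increment n -> n + 1, turning an exponential-time recursion into a linear-time loop.

```python
# Hand-written illustration of what incrementalization achieves on
#   fib(n) = fib(n-1) + fib(n-2).
# This is our sketch, not the paper's generated program.

def fib_naive(n):
    # straightforward recursion: re-solves shared sub-subproblems repeatedly
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_incremental(n):
    if n < 2:
        return n
    prev, cur = 0, 1                   # cached results: fib(0), fib(1)
    for _ in range(n - 1):             # each step increments the input by one
        prev, cur = cur, prev + cur    # reuse and maintain the cached pair
    return cur

assert fib_naive(10) == fib_incremental(10) == 55
```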
XML: model, schemas, types, logics, and queries
 In Logics for Emerging Applications of Databases, 2003
"... ... In this chapter, we shall see that the suspicion is easy dispelled. We look at techniques now used in practice for dealing with XML trees, and we note how they depart from oldfashioned uses. Since trees are objects that are very complicated to manipulate directly through pointer updates, declar ..."
Abstract

Cited by 16 (0 self)
... In this chapter, we shall see that the suspicion is easily dispelled. We look at techniques now used in practice for dealing with XML trees, and we note how they depart from old-fashioned uses. Since trees are objects that are very complicated to manipulate directly through pointer updates, declarative techniques are becoming increasingly important, especially when it comes to exploring, mining, and constructing tree-shaped data. In particular, we will contrast conventional concepts of database theory such as relational calculus with those of more procedural notations for trees. We explore why the essential problem of locating data in trees is intimately linked with tree automata and decidable logics, somewhat in parallel to the link between query algebras and first-order logic in relational database theory. So, we shall see why logic and automata create interesting new research opportunities.
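The tree-automaton view of locating data can be shown in miniature. The toy Python sketch below is ours (not from the chapter; the tree encoding and names are hypothetical): a bottom-up run computes a state for each subtree and combines children's states at the parent, here deciding the simple query "does the tree contain a price element" (roughly //price).

```python
# Toy bottom-up tree-automaton run over an XML-like tree, deciding whether
# the tree contains a <price> element. Two states: True / False.
# Our illustration, not the chapter's formalism.

def contains_price(node):
    """A node is a (tag, children) pair, e.g. ('item', [('price', [])]).
    The state of a node is True iff its subtree contains a 'price' tag."""
    tag, children = node
    # transition: combine the children's states, then consult the node's label
    return tag == "price" or any(contains_price(c) for c in children)

doc = ("catalog", [("item", [("name", []), ("price", [])]),
                   ("item", [("name", [])])])
print(contains_price(doc))   # True: the first item has a <price> child
```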
Optimizing aggregate array computations in loops
 ACM Transactions on Programming Languages and Systems, 2005
"... An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy ..."
Abstract

Cited by 6 (1 self)
An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy in the overall computation. This paper presents a method and algorithms that eliminate such overlapping aggregate array redundancies and shows analytical and experimental performance improvements. The method is based on incrementalization, i.e., updating the values of aggregate array computations from iteration to iteration rather than computing them from scratch in each iteration. This involves maintaining additional values not maintained in the original program. We reduce various analysis problems to solving inequality constraints on loop variables and array subscripts, and we apply results from work on array data dependence analysis. For aggregate array computations that have significant redundancy, incrementalization produces drastic speedup compared to previous optimizations; when there is little redundancy, the benefit might be offset by cache effects and other factors. Previous methods for loop optimizations of arrays do not perform incrementalization, and previous techniques for loop incrementalization do not handle arrays.
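The kind of redundancy the paper targets is easy to see on sliding-window sums. The Python sketch below is ours (names and framing are hypothetical, not the paper's code): consecutive windows overlap in k - 1 elements, so the naive loop recomputes almost the same aggregate each iteration, while the incremental version updates it by adding the entering element and subtracting the leaving one.

```python
# Our sketch of overlapping aggregate array redundancy and its elimination
# by incrementalization; not code from the paper.

def window_sums_naive(a, k):
    # O(n * k): the aggregate is recomputed from scratch in each iteration
    return [sum(a[i:i + k]) for i in range(len(a) - k + 1)]

def window_sums_incremental(a, k):
    # O(n): maintain the invariant s == sum(a[i:i+k]) across iterations
    s = sum(a[:k])                      # aggregate for the first window
    out = [s]
    for i in range(1, len(a) - k + 1):
        s += a[i + k - 1] - a[i - 1]    # add entering element, drop leaving one
        out.append(s)
    return out

a = [3, 1, 4, 1, 5, 9, 2, 6]
assert window_sums_naive(a, 3) == window_sums_incremental(a, 3)
```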