Results 1–10 of 26
Finite differencing of computable expressions
, 1980
Cited by 128 (6 self)
Finite differencing is a program optimization method that generalizes strength reduction, and provides an efficient implementation for a host of program transformations including "iterator inversion." Finite differencing is formally specified in terms of more basic transformations shown to preserve program semantics. Estimates of the speedup that the technique yields are given. A full illustrative example of algorithm derivation is presented.
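The core idea of finite differencing can be sketched in Python (the names and the running expression below are illustrative, not the paper's notation): instead of recomputing an expression from scratch after each change to its inputs, the invariant value is maintained with a small incremental "difference" update.

```python
# Finite differencing, hypothetically illustrated on the invariant
#   total == sum(v * v for v in s)
# maintained while elements are inserted into the set s.

def naive(updates):
    s, results = set(), []
    for x in updates:
        s.add(x)
        results.append(sum(v * v for v in s))  # O(|s|) recomputation each step
    return results

def differenced(updates):
    s, results = set(), []
    total = 0  # invariant: total == sum(v * v for v in s)
    for x in updates:
        if x not in s:        # difference code attached to s.add(x)
            s.add(x)
            total += x * x    # O(1) incremental update
        results.append(total)
    return results

assert naive([3, 1, 4, 1]) == differenced([3, 1, 4, 1])  # both [9, 10, 26, 26]
```

Strength reduction is the special case where the maintained expression is an arithmetic product of an induction variable.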
A Survey of Research on Deductive Database Systems
 JOURNAL OF LOGIC PROGRAMMING
, 1993
Cited by 117 (7 self)
The area of deductive databases has matured in recent years, and it now seems appropriate to reflect upon what has been achieved and what the future holds. In this paper, we provide an overview of the area and briefly describe a number of projects that have led to implemented systems.
Symbolic Analysis for Parallelizing Compilers
, 1994
Cited by 109 (4 self)
Symbolic Domain. The objects in our abstract symbolic domain are canonical symbolic expressions. A canonical symbolic expression is a lexicographically ordered sequence of symbolic terms. Each symbolic term is in turn a pair of an integer coefficient and a sequence of pairs of pointers to program variables in the program symbol table and their exponents. The latter sequence is also lexicographically ordered. For example, the abstract value of the symbolic expression 2ij + 3jk in an environment where i is bound to (1, ((↑i, 1))), j is bound to (1, ((↑j, 1))), and k is bound to (1, ((↑k, 1))) is ((2, ((↑i, 1), (↑j, 1))), (3, ((↑j, 1), (↑k, 1)))). In our framework, the environment is the abstract analogue of the state concept; an environment is a function from program variables to abstract symbolic values. Each environment e associates a canonical symbolic value e(x) with each variable x ∈ V; we say that x is bound to e(x). An environment might be represented by...
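The representation the abstract describes can be sketched as follows (a minimal Python model, using variable names in place of symbol-table pointers; the function name is illustrative):

```python
# Canonical symbolic expressions: an expression is a lexicographically
# ordered tuple of terms, each term a pair (integer coefficient,
# ordered tuple of (variable, exponent) pairs). Equal polynomials
# normalize to the identical tuple, so equality is structural.

def canon(monomials):
    """monomials: list of (coefficient, {variable: exponent}) pairs."""
    merged = {}
    for coeff, powers in monomials:
        key = tuple(sorted(powers.items()))      # order vars lexicographically
        merged[key] = merged.get(key, 0) + coeff # combine like terms
    # order the terms lexicographically and drop cancelled ones
    return tuple((c, k) for k, c in sorted(merged.items()) if c != 0)

# 2ij + 3jk from the example above
expr = canon([(2, {"i": 1, "j": 1}), (3, {"j": 1, "k": 1})])
assert expr == ((2, (("i", 1), ("j", 1))), (3, (("j", 1), ("k", 1))))
# the same polynomial written in another order canonicalizes identically
assert canon([(3, {"j": 1, "k": 1}), (2, {"i": 1, "j": 1})]) == expr
```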
Transformational design and implementation of a new efficient solution to the ready simulation problem
 Sci. Comp. Program
, 1995
Rule ordering in bottom-up fixpoint evaluation of logic programs
 IEEE Transactions on Knowledge and Data Engineering
, 1990
Cited by 43 (9 self)
Logic programs can be evaluated bottom-up by repeatedly applying all rules, in "iterations", until the fixpoint is reached. However, it is often desirable, and in some cases (e.g., programs with stratified negation) even necessary to guarantee the semantics, to apply the rules in some order. We present two algorithms that apply rules in a specified order without repeating inferences. One of them (GSN) is capable of dealing with a wide range of rule orderings, but with a little more overhead than the well-known semi-naive algorithm (which we call BSN). The other (PSN) handles a smaller class of rule orderings, but with no overheads beyond those in BSN. We also demonstrate that by choosing a good ordering, we can reduce the number of rule applications (and thus joins). We present a theoretical analysis of rule orderings and identify orderings that minimize the number of rule applications (for all possible instances of the base relations) with respect to a class of orderings called fair orderings. We also show that while non-fair orderings may do a little better on some data sets, they can do much worse on others. The analysis is supplemented by performance results.
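The baseline the paper builds on, semi-naive bottom-up evaluation, can be sketched in Python for the standard transitive-closure program (this is the textbook algorithm, not the paper's GSN or PSN variants):

```python
# Semi-naive bottom-up evaluation of
#   path(X, Y) :- edge(X, Y).
#   path(X, Y) :- path(X, Z), edge(Z, Y).
# Only the facts newly derived in the previous iteration (the "delta")
# are joined against edge, so no inference is ever repeated.

def transitive_closure(edges):
    path = set(edges)          # first rule seeds the fixpoint
    delta = set(edges)
    while delta:
        derived = {(x, y) for (x, z) in delta
                          for (z2, y) in edges if z == z2}
        new = derived - path   # keep only genuinely new facts
        path |= new
        delta = new            # next iteration joins only the new facts
    return path

assert transitive_closure({(1, 2), (2, 3), (3, 4)}) == \
    {(1, 2), (2, 3), (3, 4), (1, 3), (2, 4), (1, 4)}
```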
Symbolic Program Analysis and Optimization for Parallelizing Compilers
 Presented at the 5th Annual Workshop on Languages and Compilers for Parallel Computing
, 1992
Cited by 35 (3 self)
A program flow analysis framework is proposed for parallelizing compilers. Within this framework, symbolic analysis is used as an abstract interpretation technique to solve many of the flow analysis problems in a unified way. Some of these problems are constant propagation, global forward substitution, detection of loop invariant computations, and induction variable substitution. The solution space of the above problems is much larger than that handled by existing compiler technology. It covers many of the cases in benchmark codes that other parallelizing compilers cannot handle. Employing finite difference methods, the symbolic analyzer derives a functional representation of programs, which is used in dependence analysis. A systematic method for generalized strength reduction based on this representation is also presented. This results in an effective scheme for exploitation of parallelism and optimization of the code. Symbolic analysis also serves as a basis for other code generatio...
Operator Strength Reduction
, 1995
Cited by 31 (9 self)
This paper presents a new algorithm for operator strength reduction, called OSR. OSR improves upon an earlier algorithm due to Allen, Cocke, and Kennedy [Allen et al. 1981]. OSR operates on the static single assignment (SSA) form of a procedure [Cytron et al. 1991]. By taking advantage of the properties of SSA form, we have derived an algorithm that is simple to understand, quick to implement, and, in practice, fast to run. Its asymptotic complexity is, in the worst case, the same as the Allen, Cocke, and Kennedy algorithm (ACK). OSR achieves optimization results that are equivalent to those obtained with the ACK algorithm. OSR has been implemented in several research and production compilers.
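The transformation OSR automates is classic strength reduction on loop induction variables; a minimal before/after sketch in Python (an illustration of the effect, not of the SSA-based algorithm itself):

```python
# Strength reduction: a per-iteration multiply of an induction variable
# is replaced by a running accumulator that is only added to.
# Typical target: computing array element addresses base + i * stride.

def addresses_mul(base, stride, n):
    return [base + i * stride for i in range(n)]   # multiply every iteration

def addresses_add(base, stride, n):
    out, addr = [], base
    for _ in range(n):
        out.append(addr)
        addr += stride       # reduced form: addition replaces multiplication
    return out

assert addresses_mul(1000, 4, 5) == addresses_add(1000, 4, 5)
```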
Mechanical Translation of Set Theoretic Problem Specifications Into Efficient RAM Code - A Case Study
 Proc. EUROCAL 85
, 1985
Cited by 28 (8 self)
This paper illustrates a fully automatic top-down approach to program development in which formal problem specifications are mechanically translated into efficient RAM code. This code is guaranteed to be totally correct and an upper bound on its worst case asymptotic running time is automatically determined. The user is only required to supply the system with a formal problem specification, and is relieved of all responsibilities in the rest of the program development process. These results are obtained, in part, by greatly restricting the system to handle a class of determinate, set theoretic, tractable problems. The most essential transformational techniques that are used are fixed point iteration, finite differencing, and data structure selection. Rudimentary forms of these techniques have been implemented and used effectively in the RAPTS transformational programming system. This paper explains the conceptual underpinnings of our approach by considering the problem of attribute closure for relational databases and systematically deriving a program that implements a linear time solution.
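The running example, attribute closure, can be stated as the textbook fixed-point computation below (a Python sketch of the problem being solved, not the paper's derived linear-time RAM code):

```python
# Attribute closure: given a set of functional dependencies, compute
# every attribute functionally determined by a starting attribute set.
# Naive fixed-point iteration; the paper derives a linear-time version.

def attribute_closure(attrs, fds):
    """fds: list of (lhs_set, rhs_set) functional dependencies."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # if the closure covers the left side, absorb the right side
            if lhs <= closure and not rhs <= closure:
                closure |= rhs
                changed = True
    return closure

fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
assert attribute_closure({"A"}, fds) == {"A", "B", "C"}
assert attribute_closure({"B"}, fds) == {"B", "C"}
```

Finite differencing enters when the repeated test `lhs <= closure` is maintained incrementally (e.g., by counting unsatisfied left-hand-side attributes) instead of being recomputed, which is what yields the linear-time solution.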
Viewing a Program Transformation System at Work
 Joint 6th International Conference on Programming Language Implementation and Logic programming (PLILP) and 4th International Conference on Algebraic and Logic Programming (ALP), LNCS 844
, 1994
A survey of parallel execution strategies for transitive closure and logic programs
 DISTRIBUTED AND PARALLEL DATABASES
, 1993
Cited by 22 (5 self)
An important feature of database technology of the nineties is the use of parallelism for speeding up the execution of complex queries. This technology is being tested in several experimental database architectures and a few commercial systems for conventional select-project-join queries. In particular, hash-based fragmentation is used to distribute data to disks under the control of different processors in order to perform selections and joins in parallel. With the development of new query languages, and in particular with the definition of transitive closure queries and of more general logic programming queries, the new dimension of recursion has been added to query processing. Recursive queries are complex; at the same time, their regular structure is particularly suited for parallel execution, and parallelism may give a high efficiency gain. We survey the approaches to parallel execution of recursive queries that have been presented in the recent literature. We observe that research on parallel execution of recursive queries is separated into two distinct subareas, one focused on the transitive closure of Relational Algebra expressions, the other one focused on optimization of more general Datalog queries. Though the subareas seem radically different because of the approach and formalism used, they have many common features. This is not surprising, because most typical Datalog queries can be solved by means of the transitive closure of simple ...
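The hash-based fragmentation the survey mentions can be sketched in Python (a sequential simulation under illustrative names; each fragment pair would in practice be joined by its own processor):

```python
# Hash-based fragmentation for a parallel equi-join: tuples of both
# relations are routed to fragments by hashing the join attribute, so
# matching tuples always land in the same fragment and each fragment
# pair can be joined independently, with no cross-fragment traffic.

def fragment(tuples, key_index, n_procs):
    frags = [[] for _ in range(n_procs)]
    for t in tuples:
        frags[hash(t[key_index]) % n_procs].append(t)
    return frags

def fragmented_join(r, s, n_procs=4):
    """Join r on its 2nd column with s on its 1st column."""
    r_frags = fragment(r, 1, n_procs)
    s_frags = fragment(s, 0, n_procs)
    out = []
    for rf, sf in zip(r_frags, s_frags):  # each pair: one processor's work
        out.extend((a, b, d) for (a, b) in rf for (c, d) in sf if b == c)
    return sorted(out)

r = [(1, 2), (3, 4)]
s = [(2, 9), (4, 7)]
assert fragmented_join(r, s) == [(1, 2, 9), (3, 4, 7)]
```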