Results 1 – 10 of 20
Symbolic Analysis for Parallelizing Compilers
, 1994
"... Symbolic Domain The objects in our abstract symbolic domain are canonical symbolic expressions. A canonical symbolic expression is a lexicographically ordered sequence of symbolic terms. Each symbolic term is in turn a pair of an integer coefficient and a sequence of pairs of pointers to program va ..."
Abstract

Cited by 106 (4 self)
Symbolic Domain The objects in our abstract symbolic domain are canonical symbolic expressions. A canonical symbolic expression is a lexicographically ordered sequence of symbolic terms. Each symbolic term is in turn a pair of an integer coefficient and a sequence of pairs of pointers to program variables in the program symbol table and their exponents. The latter sequence is also lexicographically ordered. For example, the abstract value of the symbolic expression 2ij + 3jk, in an environment where i is bound to (1, ((↑i, 1))), j is bound to (1, ((↑j, 1))), and k is bound to (1, ((↑k, 1))), is ((2, ((↑i, 1), (↑j, 1))), (3, ((↑j, 1), (↑k, 1)))). In our framework, an environment is the abstract analogue of the state concept; an environment is a function from program variables to abstract symbolic values. Each environment e associates a canonical symbolic value e(x) with each variable x ∈ V; x is said to be bound to e(x). An environment might be represented by...
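The representation described in this abstract is concrete enough to sketch. Below is a minimal rendering of it in Python, using nested tuples for terms and expressions; the function names are illustrative, not from the paper, and ↑i is rendered simply as the string "i".

```python
# Sketch of the canonical symbolic expression representation described above.
# A term is (coefficient, ((variable, exponent), ...)) with the variable
# sequence lexicographically ordered; an expression is a lexicographically
# ordered tuple of such terms.  Names here are illustrative assumptions.

def make_term(coeff, var_exps):
    """Build a term: integer coefficient plus sorted (variable, exponent) pairs."""
    return (coeff, tuple(sorted(var_exps)))

def make_expr(*terms):
    """Build a canonical expression: terms sorted by their variable sequences."""
    return tuple(sorted(terms, key=lambda t: t[1]))

# The paper's example: 2ij + 3jk
expr = make_expr(make_term(2, [("i", 1), ("j", 1)]),
                 make_term(3, [("j", 1), ("k", 1)]))
print(expr)  # ((2, (('i', 1), ('j', 1))), (3, (('j', 1), ('k', 1))))
```

Sorting at construction time is what makes the form canonical: two syntactically different but equal expressions compare equal as tuples.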
A Survey of Research on Deductive Database Systems
 JOURNAL OF LOGIC PROGRAMMING
, 1993
"... The area of deductive databases has matured in recent years, and it now seems appropriate to re ect upon what has been achieved and what the future holds. In this paper, we provide an overview of the area and briefly describe a number of projects that have led to implemented systems. ..."
Abstract

Cited by 101 (6 self)
The area of deductive databases has matured in recent years, and it now seems appropriate to reflect upon what has been achieved and what the future holds. In this paper, we provide an overview of the area and briefly describe a number of projects that have led to implemented systems.
Transformational Design and Implementation Of A New Efficient Solution To The Ready Simulation Problem
 Science of Computer Programming
, 1995
"... A transformational methodology is described for simultaneously designing algorithms and developing programs. The methodology makes use of three transformational tools  dominated convergence, finite differencing, and realtime simulation of a set machine on a RAM. We illustrate the methodology t ..."
Abstract

Cited by 42 (2 self)
A transformational methodology is described for simultaneously designing algorithms and developing programs. The methodology makes use of three transformational tools: dominated convergence, finite differencing, and real-time simulation of a set machine on a RAM. We illustrate the methodology to design a new O(mn + n²)-time algorithm for deciding when n-state, m-transition processes are ready similar, which is a substantial improvement on the Θ(mn⁶) algorithm presented in [6]. The methodology is also used to derive a program whose performance, we believe, is competitive with the most efficient handcrafted implementation of our algorithm. Ready simulation is the finest fully abstract notion of process equivalence in the CCS setting.

1 Introduction

Currently there is a wide gap between the goals and practices of research in the theory of algorithm design and the science of programming, which we believe is
(A preliminary version of this paper appeared in the Conf...)
Rule ordering in bottom-up fixpoint evaluation of logic programs
 IEEE Transactions on Knowledge and Data Engineering
, 1990
"... Abstract Logic programs can be evaluated bottomup by repeatedly applying all rules, in "iterations", until the fixpoint is reached. However, it is often desirableand in some cases, e.g. programs with stratified negation, even necessary to guarantee the semanticsto apply the rules in s ..."
Abstract

Cited by 41 (9 self)
Logic programs can be evaluated bottom-up by repeatedly applying all rules, in "iterations", until the fixpoint is reached. However, it is often desirable (and in some cases, e.g. programs with stratified negation, even necessary to guarantee the semantics) to apply the rules in some order. We present two algorithms that apply rules in a specified order without repeating inferences. One of them (GSN) is capable of dealing with a wide range of rule orderings, but with a little more overhead than the well-known semi-naive algorithm (which we call BSN). The other (PSN) handles a smaller class of rule orderings, but with no overheads beyond those in BSN. We also demonstrate that by choosing a good ordering, we can reduce the number of rule applications (and thus joins). We present a theoretical analysis of rule orderings and identify orderings that minimize the number of rule applications (for all possible instances of the base relations) with respect to a class of orderings called fair orderings. We also show that while non-fair orderings may do a little better on some data sets, they can do much worse on others. The analysis is supplemented by performance results.
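The basic semi-naive idea the abstract builds on (BSN) can be sketched on the standard transitive-closure program. This is a generic illustration of semi-naive evaluation, not the GSN or PSN algorithm from the paper; names are illustrative.

```python
# Semi-naive bottom-up evaluation, sketched for transitive closure:
#   tc(x, y) :- edge(x, y).
#   tc(x, z) :- tc(x, y), edge(y, z).
# Only facts derived in the previous iteration (the "delta") feed the
# recursive rule, so no inference is repeated across iterations.

def transitive_closure(edges):
    tc = set(edges)
    delta = set(edges)
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - tc          # keep only genuinely new facts
        tc |= delta
    return tc

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Naive evaluation would join the whole of tc against edge each round; restricting the join to delta is exactly what avoids repeated inferences.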
Symbolic Program Analysis and Optimization for Parallelizing Compilers
 Presented at the 5th Annual Workshop on Languages and Compilers for Parallel Computing
, 1992
"... A program flow analysis framework is proposed for parallelizing compilers. Within this framework, symbolic analysis is used as an abstract interpretation technique to solve many of the flow analysis problems in a unified way. Some of these problems are constant propagation, global forward substituti ..."
Abstract

Cited by 35 (3 self)
A program flow analysis framework is proposed for parallelizing compilers. Within this framework, symbolic analysis is used as an abstract interpretation technique to solve many of the flow analysis problems in a unified way. Some of these problems are constant propagation, global forward substitution, detection of loop-invariant computations, and induction variable substitution. The solution space of the above problems is much larger than that handled by existing compiler technology. It covers many of the cases in benchmark codes that other parallelizing compilers cannot handle. Employing finite difference methods, the symbolic analyzer derives a functional representation of programs, which is used in dependence analysis. A systematic method for generalized strength reduction based on this representation is also presented. This results in an effective scheme for exploitation of parallelism and optimization of the code. Symbolic analysis also serves as a basis for other code generatio...
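Induction variable substitution, one of the transformations listed above, is easy to show in miniature: a loop-carried update is replaced by a closed form in the loop index, removing the cross-iteration dependence that blocks parallelization. This sketch only illustrates the effect of the transformation; the paper's analysis operates on compiler IR, not Python.

```python
# Before: k carries a dependence from one iteration to the next.
def with_induction_variable(n):
    k, out = 0, []
    for i in range(n):
        k = k + 3              # loop-carried dependence on k
        out.append(k)
    return out

# After substitution: k's closed form is 3 * (i + 1), so every
# iteration is independent and the loop can run in parallel.
def after_substitution(n):
    return [3 * (i + 1) for i in range(n)]

print(with_induction_variable(4) == after_substitution(4))  # True
```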
Operator Strength Reduction
, 1995
"... This paper presents a new al gS ithm for operator strengM reduction, called OSR. OSR improves upon an earlier alg orithm due to Allen, Cocke, and Kennedy [Allen et al. 1981]. OSR operates on the static sing e assig4 ent (SSA) form of a procedure [Cytron et al. 1991]. By taking advantag of the pr ..."
Abstract

Cited by 29 (9 self)
This paper presents a new algorithm for operator strength reduction, called OSR. OSR improves upon an earlier algorithm due to Allen, Cocke, and Kennedy [Allen et al. 1981]. OSR operates on the static single assignment (SSA) form of a procedure [Cytron et al. 1991]. By taking advantage of the properties of SSA form, we have derived an algorithm that is simple to understand, quick to implement, and, in practice, fast to run. Its asymptotic complexity is, in the worst case, the same as the Allen, Cocke, and Kennedy algorithm (ACK). OSR achieves optimization results that are equivalent to those obtained with the ACK algorithm. OSR has been implemented in several research and production compilers
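What operator strength reduction accomplishes can be shown in miniature: an induction-variable multiplication inside a loop is replaced by repeated addition. This illustrates the transformation's effect only; OSR itself works on the SSA graph of a procedure, which is not modeled here.

```python
# Before: one multiply per iteration to compute byte offsets.
def addresses_naive(n):
    return [i * 4 for i in range(n)]

# After strength reduction: the multiply becomes a running addition.
def addresses_reduced(n):
    out, addr = [], 0
    for _ in range(n):
        out.append(addr)
        addr += 4              # add replaces multiply
    return out

print(addresses_naive(5) == addresses_reduced(5))  # True
```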
Viewing A Program Transformation System At Work
 Joint 6th International Conference on Programming Language Implementation and Logic Programming (PLILP) and 4th International conference on Algebraic and Logic Programming (ALP), volume 844 of Lecture Notes in Computer Science
, 1994
"... How to decrease labor and improve reliability in the development of efficient implementations of nonnumerical algorithms and labor intensive software is an increasingly important problem as the demand for computer technology shifts from easier applications to more complex algorithmic ones; e.g., opt ..."
Abstract

Cited by 26 (2 self)
How to decrease labor and improve reliability in the development of efficient implementations of nonnumerical algorithms and labor-intensive software is an increasingly important problem as the demand for computer technology shifts from easier applications to more complex algorithmic ones; e.g., optimizing compilers for supercomputers, intricate data structures to implement efficient solutions to operations research problems, search and analysis algorithms in genetic engineering, complex software tools for workstations, design automation, etc. It is also a difficult problem that is not solved by current CASE tools and software management disciplines, which are oriented towards data processing and other applications, where the implementation and a prediction of its resource utilization follow more directly from the specification. Recently, Cai and Paige reported experiments suggesting a way to implement nonnumerical algorithms in C at a programming rate (i.e., source lines per second) t...
Mechanical Translation of Set Theoretic Problem Specifications Into Efficient RAM Code  A Case Study
 Proc. EUROCAL 85
, 1985
"... This paper illustrates a fully automatic topdown approach to program development in which formal problem specifications are mechanically translated into efficient RAM code. This code is guaranteed to be totally correct and an upper bound on its worst case asymptotic running time is automatically de ..."
Abstract

Cited by 26 (8 self)
This paper illustrates a fully automatic top-down approach to program development in which formal problem specifications are mechanically translated into efficient RAM code. This code is guaranteed to be totally correct and an upper bound on its worst case asymptotic running time is automatically determined. The user is only required to supply the system with a formal problem specification, and is relieved of all responsibilities in the rest of the program development process. These results are obtained, in part, by greatly restricting the system to handle a class of determinate, set theoretic, tractable problems. The most essential transformational techniques that are used are fixed point iteration, finite differencing, and data structure selection. Rudimentary forms of these techniques have been implemented and used effectively in the RAPTS transformational programming system. This paper explains the conceptual underpinnings of our approach by considering the problem of attribute closure for relational databases and systematically deriving a program that implements a linear time solution. 1.
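The attribute-closure problem named above has a straightforward fixed-point formulation, sketched below. This is the naive fixpoint iteration, not the linear-time program the paper derives from it by finite differencing; the function name and data encoding are illustrative.

```python
# Fixed point iteration for attribute closure in relational databases:
# starting from a set of attributes, repeatedly fire functional
# dependencies lhs -> rhs whose left side is already in the closure,
# until nothing changes.

def attribute_closure(attrs, fds):
    """fds: list of (lhs, rhs) pairs of frozensets of attribute names."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closure and not rhs <= closure:
                closure |= rhs
                changed = True
    return closure

fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
print(sorted(attribute_closure({"A"}, fds)))  # ['A', 'B', 'C']
```

Finite differencing, as the abstract describes, would transform exactly this kind of repeated from-scratch scan into incremental updates.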
A survey of parallel execution strategies for transitive closure and logic programs
 DISTRIBUTED AND PARALLEL DATABASES
, 1993
"... An important feature of database technology of the nineties is the use of parallelism for speeding up the execution of complex queries. This technology is being tested in several experimental database architectures and a few commercial systems for conventional selectprojectjoin queries. In particu ..."
Abstract

Cited by 20 (5 self)
An important feature of database technology of the nineties is the use of parallelism for speeding up the execution of complex queries. This technology is being tested in several experimental database architectures and a few commercial systems for conventional select-project-join queries. In particular, hash-based fragmentation is used to distribute data to disks under the control of different processors in order to perform selections and joins in parallel. With the development of new query languages, and in particular with the definition of transitive closure queries and of more general logic programming queries, the new dimension of recursion has been added to query processing. Recursive queries are complex; at the same time, their regular structure is particularly suited for parallel execution, and parallelism may give a high efficiency gain. We survey the approaches to parallel execution of recursive queries that have been presented in the recent literature. We observe that research on parallel execution of recursive queries is separated into two distinct subareas, one focused on the transitive closure of Relational Algebra expressions, the other one focused on optimization of more general Datalog queries. Though the subareas seem radically different because of the approach and formalism used, they have many common features. This is not surprising, because most typical Datalog queries can be solved by means of the transitive closure of simple
Loop optimization for aggregate array computations
"... An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in signi cant redundancy ..."
Abstract

Cited by 15 (7 self)
An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy in the overall computation. This paper presents a method and algorithms that eliminate such overlapping aggregate array redundancies and shows both analytical and experimental performance improvements. The method is based on incrementalization, i.e., updating the values of aggregate array computations from iteration to iteration rather than computing them from scratch in each iteration. This involves maintaining additional information not maintained in the original program. We reduce various analysis problems to solving inequality constraints on loop variables and array subscripts, and we apply results from work on array data dependence analysis. Incrementalizing aggregate array computations produces drastic program speedup compared to previous optimizations. Previous methods for loop optimizations of arrays do not perform incrementalization, and previous techniques for loop incrementalization do not handle arrays.
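The incrementalization idea described above can be sketched on the classic overlapping-aggregate example, a sliding-window sum: consecutive windows share all but two elements, so the sum can be updated rather than recomputed. This is a hand-written illustration of the effect, not the paper's automatic transformation; names are illustrative.

```python
# Incrementalization of an aggregate array computation: update the
# window sum across iterations instead of recomputing it from scratch.

def window_sums_naive(a, w):
    # O(n * w): each window sum recomputed in full.
    return [sum(a[i:i + w]) for i in range(len(a) - w + 1)]

def window_sums_incremental(a, w):
    # O(n): maintain the previous sum; subtract the element leaving
    # the window and add the element entering it.
    s = sum(a[:w])
    out = [s]
    for i in range(1, len(a) - w + 1):
        s += a[i + w - 1] - a[i - 1]
        out.append(s)
    return out

a = [3, 1, 4, 1, 5, 9, 2, 6]
print(window_sums_naive(a, 3) == window_sums_incremental(a, 3))  # True
```

The running sum `s` is exactly the "additional information not maintained in the original program" that the abstract refers to.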