Results 1–10 of 11
Deriving incremental programs
, 1993
Abstract

Cited by 39 (21 self)
A systematic approach is given for deriving incremental programs from non-incremental programs written in a standard functional programming language. We exploit a number of program analysis and transformation techniques and domain-specific knowledge, centered around effective utilization of caching, in order to provide a degree of incrementality not otherwise achievable by a generic incremental evaluator.
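The caching idea behind this derivation can be sketched as follows. This is an illustrative toy, not the paper's derivation system; the names `sum_from_scratch` and `IncrementalSum` are hypothetical, chosen for the example:

```python
def sum_from_scratch(xs):
    """Non-incremental version: recomputes the result in O(n) on every call."""
    return sum(xs)

class IncrementalSum:
    """Incremental version: caches the result once and updates it in O(1)
    under the change operation 'append', instead of recomputing."""
    def __init__(self, xs):
        self.cache = sum(xs)   # cached result of the expensive computation
    def append(self, x):
        self.cache += x        # differenced update under the change
        return self.cache

inc = IncrementalSum([1, 2, 3])
assert inc.append(4) == sum_from_scratch([1, 2, 3, 4])  # → 10
```

The point is that the update rule for `append` is derived from the structure of the original computation, so the cached value stays consistent with what the from-scratch version would produce.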
From Datalog rules to efficient programs with time and space guarantees
 In PPDP ’03: Proceedings of the 5th ACM SIGPLAN International Conference on Principles and Practice of Declarative Programming
, 2003
Abstract

Cited by 33 (12 self)
This paper describes a method for transforming any given set of Datalog rules into an efficient specialized implementation with guaranteed worst-case time and space complexities, and for computing the complexities from the rules. The running time is optimal in the sense that only useful combinations of facts that lead to all hypotheses of a rule being simultaneously true are considered, and each such combination is considered exactly once. The associated space usage is optimal in that it is the minimum space needed for such consideration modulo scheduling optimizations that may eliminate some summands in the space usage formula. The transformation is based on a general method for algorithm design that exploits fixed-point computation, incremental maintenance of invariants, and combinations of indexed and linked data structures. We apply the method to a number of analysis problems, some with improved algorithm complexities and all with greatly improved algorithm understanding and greatly simplified complexity analysis.
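The flavor of the fixed-point computation with incremental maintenance can be seen in a hand-written semi-naive evaluation of the standard transitive-closure rules (a sketch only, not the code the paper's transformation would generate):

```python
# Semi-naive fixed-point evaluation of the Datalog rules
#   path(X,Y) :- edge(X,Y).
#   path(X,Z) :- path(X,Y), edge(Y,Z).
# Only facts derived in the previous round (the delta) are joined with
# edges, so each useful combination of facts is considered exactly once.

def transitive_closure(edges):
    succ = {}                          # index: Y -> { Z | edge(Y, Z) }
    for y, z in edges:
        succ.setdefault(y, set()).add(z)
    path = set(edges)                  # base rule
    delta = set(edges)                 # facts new in the last round
    while delta:
        new = set()
        for x, y in delta:             # join only NEW path facts with edges
            for z in succ.get(y, ()):
                if (x, z) not in path:
                    new.add((x, z))
        path |= new
        delta = new
    return path

assert transitive_closure({(1, 2), (2, 3)}) == {(1, 2), (2, 3), (1, 3)}
```

The `succ` index plays the role of the indexed data structures the abstract mentions: it makes each join step proportional to the number of matching facts rather than to the whole relation.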
Lazy Strength Reduction
 Journal of Programming Languages
Abstract

Cited by 24 (8 self)
We present a bit-vector algorithm that uniformly combines code motion and strength reduction, avoids superfluous register pressure due to unnecessary code motion, and is as efficient as standard unidirectional analyses. The point of this algorithm is to combine the concept of lazy code motion of [1] with the concept of unifying code motion and strength reduction of [2, 3, 4, 5]. This results in an algorithm for lazy strength reduction, which consists of a sequence of unidirectional analyses, and is unique in its transformational power. Keywords: data flow analysis, program optimization, partial redundancy elimination, code motion, strength reduction, bit-vector data flow analyses.

1 Motivation

Code motion improves the runtime efficiency of a program by avoiding unnecessary recomputations of a value at runtime. Strength reduction improves runtime efficiency by reducing "expensive" recomputations to less expensive ones, e.g., by reducing computations involving multiplication to computat...
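The transformation the paper performs on flow graphs can be illustrated at the source level. This is a minimal sketch of strength reduction alone, separate from the paper's bit-vector formulation; both function names are hypothetical:

```python
def scaled_naive(n, c):
    out = []
    for i in range(n):
        out.append(i * c)   # "expensive" multiplication on every iteration
    return out

def scaled_reduced(n, c):
    out = []
    t = 0                   # invariant maintained by the transformation: t == i * c
    for i in range(n):
        out.append(t)
        t += c              # strength-reduced: addition replaces multiplication
    return out

assert scaled_naive(5, 3) == scaled_reduced(5, 3) == [0, 3, 6, 9, 12]
```

The assertion checks the defining property of the transformation: it must preserve the computed values while replacing the costlier operation.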
A simple inductive synthesis methodology and its applications
, 2010
Abstract

Cited by 16 (10 self)
Given a high-level specification and a low-level programming language, our goal is to automatically synthesize an efficient program that meets the specification. In this paper, we present a new algorithmic methodology for inductive synthesis that allows us to do this. We use second-order logic as our generic high-level specification logic. For our low-level languages we choose small application-specific logics that can be immediately translated into code that runs in expected linear time in the worst case. We explain our methodology and provide examples of the synthesis of several graph classifiers, e.g., linear-time tests of whether the input graph is connected, acyclic, etc. In another set of applications we automatically derive many finite differencing expressions equivalent to ones that Paige built by hand in his thesis [Pai81]. Finally we describe directions for automatically combining such automatically generated building blocks to synthesize efficient code implementing more complicated specifications. The methods in this paper have been implemented in Python using the SMT solver Z3 [dMB].
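A finite differencing expression of the kind Paige built by hand maintains a derived value under changes to its inputs. The following toy (hypothetical class name `DiffedSet`, not taken from the paper) shows the shape of such an expression for the derived set of even members:

```python
class DiffedSet:
    """Maintains the invariant  evens == {x in s | x % 2 == 0}
    incrementally under insertions, instead of recomputing it."""
    def __init__(self):
        self.s = set()
        self.evens = set()
    def insert(self, x):
        self.s.add(x)
        if x % 2 == 0:       # differenced update rule for 'insert'
            self.evens.add(x)

d = DiffedSet()
for x in [1, 2, 3, 4]:
    d.insert(x)
assert d.evens == {x for x in d.s if x % 2 == 0} == {2, 4}
```

What the paper automates is the derivation of the `if`-guarded update rule from the set-former and the change operation, rather than writing it by hand as done here.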
Efficient Code Motion and an Adaption to Strength Reduction
 In Proceedings of the 4th International Joint Conference on TAPSOFT
, 1991
Abstract

Cited by 10 (3 self)
In this paper we consider two elaborations of this algorithm, which are dealt with in Part I and Part II, respectively. Part I deals with the problem that the full variant of the algorithm of [SKR1] may introduce excessively many trivial redefinitions of registers in order to cover a single computation. Rosen, Wegman and Zadeck avoided such an excessive introduction of trivial redefinitions by means of some practically oriented restrictions, and they proposed an efficient algorithm which optimally moves the computations of acyclic flow graphs under these additional constraints (the algorithm is "RWZ-optimal" for acyclic flow graphs) [RWZ]. Here we adapt our algorithm to this notion of optimality. The result is a modular and efficient algorithm which avoids an excessive introduction of trivial redefinitions along the lines of [RWZ], and is RWZ-optimal for arbitrary flow graphs. Part II modularly extends the algorithm of [SKR1] in order to additionally cover strength reduction. This extension generalizes and improves all classical techniques for strength reduction in that it overcomes their structural restrictions concerning admissible program structures (e.g. previously determined loops) and admissible term structures (e.g. terms built of induction variables and region constants). Additionally, the program transformation obtained by our algorithm is guaranteed to be safe and to improve runtime efficiency; neither property is guaranteed by previous techniques.

Structure of the Paper
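The code-motion side of this work, shown here as a source-level toy rather than the paper's flow-graph algorithm, hoists a loop-invariant computation into a single temporary; the trivial redefinition the paper worries about is exactly such an introduced temporary (function names `before`/`after` are hypothetical):

```python
def before(a, b, xs):
    out = []
    for x in xs:
        out.append(x + a * b)   # a * b recomputed on every iteration
    return out

def after(a, b, xs):
    t = a * b                   # hoisted: one register-like temporary,
    out = []                    # computed once outside the loop
    for x in xs:
        out.append(x + t)
    return out

assert before(2, 3, [1, 2]) == after(2, 3, [1, 2]) == [7, 8]
```

Introducing `t` pays off here because it covers many uses; the RWZ-style restrictions the abstract discusses exist to avoid introducing such temporaries when they would cover only a single computation.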
Solving Regular Tree Grammar Based Constraints
 In Proceedings of the 8th International Static Analysis Symposium
, 2000
Abstract

Cited by 5 (4 self)
This paper describes the precise specification, design, analysis, implementation, and measurements of an efficient algorithm for solving regular tree grammar based constraints. The particular constraints are for dead-code elimination on recursive data, but the method used for the algorithm design and complexity analysis is general and applies to other program analysis problems as well. The method is centered around Paige's finite differencing, i.e., computing expensive set expressions incrementally, and allows the algorithm to be derived and analyzed formally and implemented easily. We study higher-level transformations that make the derived algorithm concise and allow its complexity to be analyzed accurately. Although a rough analysis shows that the worst-case time complexity is cubic in program size, an accurate analysis shows that it is linear in the number of live program points and in other parameters, including mainly the arity of data constructors and the number of selector applications into whose arguments the value constructed at a program point might flow. These parameters explain the performance of the analysis in practice. Our implementation also runs two to ten times as fast as a previous implementation of an informally designed algorithm.
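Constraint solving of this general shape is typically a fixed-point iteration over set inclusions. The following rough sketch (the paper's constraints come from regular tree grammars; here they are simplified to inclusions `lhs ⊇ rhs` over variables and ground sets, and `solve` is a hypothetical name) shows the least-fixed-point computation that finite differencing then makes incremental:

```python
def solve(constraints):
    """constraints: list of (lhs_var, rhs), where rhs is either another
    variable name or a frozenset of ground values.
    Returns the least solution as a dict of sets."""
    sol = {v: set() for v, _ in constraints}
    changed = True
    while changed:               # naive iteration to a fixed point
        changed = False
        for lhs, rhs in constraints:
            add = sol.get(rhs, set()) if isinstance(rhs, str) else rhs
            if not add <= sol[lhs]:
                sol[lhs] |= add  # grow lhs until all inclusions hold
                changed = True
    return sol

sol = solve([("X", frozenset({1})), ("Y", "X"), ("X", "Y")])
assert sol["X"] == sol["Y"] == {1}
```

The naive loop re-examines every constraint each round; the finite-differencing derivation described in the abstract replaces this with worklist-driven incremental updates, which is where the improved complexity comes from.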
Comparing Three Approaches to Transformational Programming
, 1991
Abstract
Transformational programming is a methodology that intends to formalize the development of programs from problem specifications. Given the recent effort towards the design of a common prototyping system (CPS) for the Ada programming language, transformation systems may be reconsidered as possible components of prototyping systems. This paper examines and evaluates three approaches to transformational programming:
• The Munich CIP project (Computer-aided, Intuition-guided Programming) consists of a strongly typed, wide-spectrum language with user-defined algebraic types and a semi-automatic transformation system that requires user guidance.
• By contrast, "Algorithmics," the work on algebraic specification originating from IFIP WG 2.1, is a pure pencil-and-paper approach to transformational programming. It provides a concise, uniform mathematical notation and includes work on nondeterminism.
• RAPTS (Robert A. Paige's Transformation System) is a fully mechanical system t...
A Comparison of Three Approaches to Transformational Programming
, 1991
Abstract
Transformational programming is a methodology that intends to formalize the development of programs from problem specifications. Given the recent effort towards the design of a common prototyping system (CPS) for the Ada programming language, transformation systems may be reconsidered as possible components of prototyping systems. This paper examines and evaluates three approaches to transformational programming:
• The Munich CIP project (Computer-aided, Intuition-guided Programming) consists of a strongly typed, wide-spectrum language with user-defined algebraic types and a semi-automatic transformation system that requires user guidance.
• By contrast, "Algorithmics," the work on algebraic specification originating from IFIP WG 2.1, is a pure pencil-and-paper approach to transformational programming. It provides a concise, uniform mathematical notation and includes work on nondeterminism.
• RAPTS (Robert A. Paige's Transformation System) is a fully mechanical system that tran...