Results 1–10 of 12
From Datalog rules to efficient programs with time and space guarantees
In PPDP ’03: Proceedings of the 5th ACM SIGPLAN International Conference on Principles and Practice of Declarative Programming, 2003
"... This paper describes a method for transforming any given set of Datalog rules into an efficient specialized implementation with guaranteed worstcase time and space complexities, and for computing the complexities from the rules. The running time is optimal in the sense that only useful combinations ..."
Abstract

Cited by 30 (12 self)
 Add to MetaCart
This paper describes a method for transforming any given set of Datalog rules into an efficient specialized implementation with guaranteed worst-case time and space complexities, and for computing the complexities from the rules. The running time is optimal in the sense that only useful combinations of facts that lead to all hypotheses of a rule being simultaneously true are considered, and each such combination is considered exactly once. The associated space usage is optimal in that it is the minimum space needed for such consideration, modulo scheduling optimizations that may eliminate some summands in the space usage formula. The transformation is based on a general method for algorithm design that exploits fixed-point computation, incremental maintenance of invariants, and combinations of indexed and linked data structures. We apply the method to a number of analysis problems, some with improved algorithm complexities and all with greatly improved algorithm understanding and greatly simplified complexity analysis.
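The key property above, that each rule-firing combination of facts is considered exactly once, is what semi-naive (differential) evaluation achieves. A minimal sketch in Python, where the rule, relation names, and indexing are illustrative assumptions rather than the paper's generated code:

```python
# Rules: path(x, y) :- edge(x, y).   path(x, z) :- path(x, y), edge(y, z).
def transitive_closure(edges):
    succ = {}                       # index edge facts by first column
    for a, b in edges:
        succ.setdefault(a, set()).add(b)
    path = set(edges)               # all facts derived so far
    delta = set(edges)              # facts new in the last iteration
    while delta:                    # fixed point maintained incrementally
        new = set()
        for x, y in delta:          # join only *new* path facts with edge facts
            for z in succ.get(y, ()):
                if (x, z) not in path:
                    new.add((x, z))
        path |= new
        delta = new
    return path

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```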
Automatic Accurate Cost-Bound Analysis for High-Level Languages
2001
"... This paper describes a languagebased approach for automatic and accurate costbound analysis. The approach consists of transformations for building costbound functions in the presence of partially known input structures, symbolic evaluation of the costbound function based on input size parameter ..."
Abstract

Cited by 15 (4 self)
 Add to MetaCart
This paper describes a language-based approach for automatic and accurate cost-bound analysis. The approach consists of transformations for building cost-bound functions in the presence of partially known input structures, symbolic evaluation of the cost-bound function based on input size parameters, and optimizations to make the overall analysis efficient as well as accurate, all at the source-language level. The calculated cost bounds are expressed in terms of primitive cost parameters. These parameters can be obtained based on the language implementation or be measured conservatively or approximately, yielding accurate, conservative, or approximate time or space bounds. We have implemented this approach and performed a number of experiments for analyzing Scheme programs. The results helped confirm the accuracy of the analysis.
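A toy illustration of the idea, with a hypothetical example and parameter names that are not the paper's: the cost-bound function mirrors the structure of the program it bounds, but accumulates primitive cost parameters instead of computing a result.

```python
# Assumed parameters: c_test, c_add, c_call price a test, an addition, a call.
def length(xs):
    return 0 if not xs else 1 + length(xs[1:])

def length_cost(n, c_test=1, c_add=1, c_call=1):
    # time bound for length(xs) when only the size n of xs is known
    if n == 0:
        return c_test
    return c_test + c_add + c_call + length_cost(n - 1, c_test, c_add, c_call)

print(length([7, 8, 9]), length_cost(3))  # 3 10
```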
Optimizing Ackermann's Function by Incrementalization
2001
"... This paper describes a formal derivation of an optimized Ackermann's function following a general and systematic method based on incrementalization. The method identifies an appropriate input increment operation and computes the function by repeatedly performing an incremental computation at the ste ..."
Abstract

Cited by 7 (3 self)
 Add to MetaCart
This paper describes a formal derivation of an optimized Ackermann's function following a general and systematic method based on incrementalization. The method identifies an appropriate input increment operation and computes the function by repeatedly performing an incremental computation at each step of the increment. This eliminates repeated subcomputations in executions that follow the straightforward recursive definition of Ackermann's function, yielding an optimized program that is drastically faster and takes extremely little space. This case study uniquely shows both the power and the limitations of the incrementalization method, as well as the iterative and recursive nature of the computation underlying the optimized Ackermann's function.
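For flavor only, a memoized sketch that exposes the repeated subcomputations in the textbook definition; this cache-based stand-in is an assumption of ours, not the iterative, space-frugal program the paper derives by incrementalization.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # caching removes the repeated subcomputations
def ack(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3), ack(3, 3))  # 9 61
```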
Program Optimization Using Indexed and Recursive Data Structures
2002
"... This paper describes a systematic method for optimizing recursive functions using both indexed and recursive data structures. The method is based on two critical ideas: first, determining a minimal input increment operation so as to compute a function on repeatedly incremented input; second, determi ..."
Abstract

Cited by 6 (5 self)
 Add to MetaCart
This paper describes a systematic method for optimizing recursive functions using both indexed and recursive data structures. The method is based on two critical ideas: first, determining a minimal input increment operation so as to compute a function on repeatedly incremented input; second, determining appropriate additional values to maintain in appropriate data structures, based on what values are needed in computation on an incremented input and how these values can be established and accessed. Once these two are determined, the method extends the original program to return the additional values, derives an incremental version of the extended program, and forms an optimized program that repeatedly calls the incremental program. The method can derive all dynamic programming algorithms found in standard algorithm textbooks. There are many previous methods for deriving efficient algorithms, but none is as simple, general, and systematic as ours.
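The smallest instance of this recipe, with illustrative names rather than the paper's derivation: under the input increment n -> n + 1, computing fib(n + 1) needs fib(n) and fib(n - 1), so maintaining those two additional values turns the exponential recursion into the linear dynamic program.

```python
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_incremental(n):
    a, b = 0, 1                # fib(0), fib(1): the maintained values
    for _ in range(n):
        a, b = b, a + b        # incremental step for the increment n -> n + 1
    return a

assert fib_incremental(20) == fib_naive(20) == 6765
```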
Program Generation, Termination, and Binding-Time Analysis
2002
"... Recent research suggests that the goal of fully automatic and reliable program generation for a broad range of applications is coming nearer to feasibility. However, several interesting and challenging problems remain to be solved before it becomes a reality. Solving them is also necessary, if we ho ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
Recent research suggests that the goal of fully automatic and reliable program generation for a broad range of applications is coming nearer to feasibility. However, several interesting and challenging problems remain to be solved before it becomes a reality. Solving them is also necessary if we hope ever to elevate software engineering from its current state (a highly developed handiwork) into a successful branch of engineering, capable of solving a wide range of new problems by systematic, well-automated and well-founded methods.
Optimizing aggregate array computations in loops
ACM Transactions on Programming Languages and Systems, 2005
"... An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy ..."
Abstract

Cited by 6 (1 self)
 Add to MetaCart
An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy in the overall computation. This paper presents a method and algorithms that eliminate such overlapping aggregate array redundancies and shows analytical and experimental performance improvements. The method is based on incrementalization, i.e., updating the values of aggregate array computations from iteration to iteration rather than computing them from scratch in each iteration. This involves maintaining additional values not maintained in the original program. We reduce various analysis problems to solving inequality constraints on loop variables and array subscripts, and we apply results from work on array data dependence analysis. For aggregate array computations that have significant redundancy, incrementalization produces drastic speedup compared to previous optimizations; when there is little redundancy, the benefit might be offset by cache effects and other factors. Previous methods for loop optimizations of arrays do not perform incrementalization, and previous techniques for loop incrementalization do not handle arrays.
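A minimal sketch of the incrementalization on a hypothetical aggregate (a sliding-window sum, not an example taken from the paper): consecutive windows overlap in all but two elements, so the previous aggregate can be updated with one addition and one subtraction instead of being recomputed.

```python
def window_sums_naive(a, k):
    return [sum(a[i:i + k]) for i in range(len(a) - k + 1)]

def window_sums_incremental(a, k):
    s = sum(a[:k])                       # aggregate for the first window
    out = [s]
    for i in range(1, len(a) - k + 1):
        s += a[i + k - 1] - a[i - 1]     # update: add entering, drop leaving
        out.append(s)
    return out

a = [3, 1, 4, 1, 5, 9, 2, 6]
assert window_sums_naive(a, 3) == window_sums_incremental(a, 3)
```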
Solving Regular Tree Grammar Based Constraints
In Proceedings of the 8th International Static Analysis Symposium, 2000
"... This paper describes the precise specification, design, analysis, implementation, and measurements of an efficient algorithm for solving regular tree grammar based constraints. The particular constraints are for deadcode elimination on recursive data, but the method used for the algorithm design an ..."
Abstract

Cited by 5 (4 self)
 Add to MetaCart
This paper describes the precise specification, design, analysis, implementation, and measurements of an efficient algorithm for solving regular tree grammar based constraints. The particular constraints are for dead-code elimination on recursive data, but the method used for the algorithm design and complexity analysis is general and applies to other program analysis problems as well. The method is centered around Paige's finite differencing, i.e., computing expensive set expressions incrementally, and allows the algorithm to be derived and analyzed formally and implemented easily. We study higher-level transformations that make the derived algorithm concise and allow its complexity to be analyzed accurately. Although a rough analysis shows that the worst-case time complexity is cubic in program size, an accurate analysis shows that it is linear in the number of live program points and in other parameters, including mainly the arity of data constructors and the number of selector applications into whose arguments the value constructed at a program point might flow. These parameters explain the performance of the analysis in practice. Our implementation also runs two to ten times as fast as a previous implementation of an informally designed algorithm.
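The finite-differencing spirit can be shown on much simpler set-inclusion constraints than the grammar-based ones in the paper (all names below are illustrative assumptions): a worklist solver propagates only newly added elements, the incremental counterpart of re-evaluating whole set expressions.

```python
def solve(memberships, inclusions, variables):
    sets = {v: set() for v in variables}
    work = []                                  # (variable, new element) pairs
    for c, x in memberships:                   # constraint: c in X
        sets[x].add(c)
        work.append((x, c))
    supersets = {}
    for x, y in inclusions:                    # constraint: X subseteq Y
        supersets.setdefault(x, []).append(y)
    while work:
        x, c = work.pop()
        for y in supersets.get(x, ()):         # push only the new element c
            if c not in sets[y]:
                sets[y].add(c)
                work.append((y, c))
    return sets

print(solve([('a', 'X'), ('b', 'Y')], [('X', 'Y'), ('Y', 'Z')], 'XYZ'))
# {'X': {'a'}, 'Y': {'a', 'b'}, 'Z': {'a', 'b'}}
```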
Solving Regular Path Queries
In Proceedings of the 6th International Conference on Mathematics of Program Construction, 2002
"... Regular path queries are a way of declaratively specifying program analyses as a kind of regular expressions that are matched against paths in graph representations of programs. This paper describes the precise specication, derivation, and analysis of a complete algorithm and data structures for ..."
Abstract

Cited by 5 (4 self)
 Add to MetaCart
Regular path queries are a way of declaratively specifying program analyses as a kind of regular expression that is matched against paths in graph representations of programs. This paper describes the precise specification, derivation, and analysis of a complete algorithm and data structures for solving regular path queries. The time and space complexity of the algorithm is linear in the size of the graph. We first show two ways of specifying the problem and deriving a high-level algorithmic solution, using predicate logic and language inclusion, respectively.
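A compact sketch of one standard way to get such a linear bound (assumed here as an illustration; the paper specifies and derives its algorithm formally): explore pairs of a graph node and a query-automaton state, visiting each pair at most once, so time is linear in the size of the product.

```python
from collections import deque

def regular_path_query(graph, delta, start_state, accept, sources):
    # graph: node -> list of (edge label, successor)
    # delta: (automaton state, label) -> next automaton state (partial map)
    seen = {(n, start_state) for n in sources}
    work = deque(seen)
    hits = set()
    while work:
        n, q = work.popleft()
        if q in accept:
            hits.add(n)                  # some path from a source to n matches
        for lab, m in graph.get(n, ()):
            q2 = delta.get((q, lab))
            if q2 is not None and (m, q2) not in seen:
                seen.add((m, q2))
                work.append((m, q2))
    return hits

# nodes reachable from node 1 along a path matching the query a* b
graph = {1: [('a', 2)], 2: [('a', 3), ('b', 4)], 3: [('b', 4)]}
delta = {(0, 'a'): 0, (0, 'b'): 1}
print(regular_path_query(graph, delta, 0, {1}, {1}))   # {4}
```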
Automated software engineering using concurrent class machines
In Proceedings of ASE’01, the 16th IEEE International Conference on Automated Software Engineering. IEEE, 2001
"... Concurrent Class Machines are a novel statemachine model that directly captures a variety of objectoriented concepts, including classes and inheritance, objects and object creation, methods, method invocation and exceptions, multithreading and abstract collection types. The model can be understood ..."
Abstract

Cited by 5 (1 self)
 Add to MetaCart
Concurrent Class Machines are a novel state-machine model that directly captures a variety of object-oriented concepts, including classes and inheritance, objects and object creation, methods, method invocation and exceptions, multithreading, and abstract collection types. The model can be understood as a precise definition of UML activity diagrams which, at the same time, offers an executable, object-oriented alternative to event-based statecharts. It can also be understood as a visual, combined control and data flow model for multithreaded object-oriented programs. We first introduce a visual notation and tool for Concurrent Class Machines and discuss their benefits in enhancing system design. We then equip this notation with a precise semantics that allows us to define refinement and modular refinement rules. Finally, we summarize our work on generation of optimized code, implementation and experiments, and compare with related work.
An improved differential fixpoint iteration method for program analysis
In Proc. of the 3rd Asian Workshop on Programming Languages and Systems, 2002
"... Abstract. We present a differential fixpoint iteration method to be used in static program analyses. The differential method consists of two phases: first we transform the program analysis equations into differential ones, and then we apply a differential fixpoint iteration to the equations computin ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
We present a differential fixpoint iteration method to be used in static program analyses. The differential method consists of two phases: first we transform the program analysis equations into differential ones, and then we apply a differential fixpoint iteration to the equations, computing with the differences from the previous iterations. We implemented the method for the exception analysis of ML programs and also for the constant propagation and alias analysis of C and Fortran programs. For the exception analysis, our differential method saves about 20–50% of the computation. For the constant propagation and alias analysis, our method has linear asymptotic performance: its cost remains linear in the height of the lattice structures. Our algorithm is not yet formally proven correct, but in our experiments the analysis results were confirmed identical to those from a non-differential fixpoint algorithm. Both analyses are for realistic ML/C/Fortran programs.
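A toy sketch of the two phases on a set-valued equation (illustrative, not the authors' implementation): the "differential" function receives only the elements added in the previous iteration, and the driver maintains the accumulated result instead of recomputing it from scratch.

```python
def differential_fixpoint(f_delta, init):
    total = set(init)
    delta = set(init)
    while delta:
        delta = f_delta(delta, total) - total  # compute with differences only
        total |= delta
    return total

# example equation: Reach = {1} union post(Reach), solved differentially
edges = {1: {2, 3}, 2: {4}, 3: {4}, 4: set()}
reach = differential_fixpoint(
    lambda new, total: {m for n in new for m in edges[n]}, {1})
print(reach)  # {1, 2, 3, 4}
```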