Results 11 – 20 of 22
An Improved Intraprocedural May-Alias Analysis Algorithm
, 1999
Abstract

Cited by 8 (2 self)
Hind et al. ([5]) use a standard dataflow framework [15, 16] to formulate an intraprocedural may-alias computation. The intraprocedural aliasing information is computed by applying well-known iterative techniques to the Sparse Evaluation Graph (SEG) ([3]). The computation requires a transfer function for each node that causes a potential pointer assignment (relating the dataflow information flowing into and out of the node), and a set of aliases holding at the entry node of the SEG. The intraprocedural analysis assumes that precomputed information in the form of summary functions is available for all function-call sites in the procedure being analyzed. The time complexity of the intraprocedural may-alias computation for the algorithm presented by Hind et al. ([5]) is O(N^6) in the worst case (where N is the size of the SEG). In this paper we present a worst-case O(N^3) time algorithm to compute the same may-alias information.
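The iterative dataflow technique the abstract refers to can be sketched as a worklist fixpoint over a graph with per-node transfer functions. This is a minimal generic sketch of that standard technique, not the paper's algorithm; the graph, node names, and transfer functions below are hypothetical illustrations.

```python
# Minimal worklist sketch of iterative dataflow analysis: propagate sets
# of facts along graph edges until a least fixpoint is reached.
def iterate(succ, entry, entry_facts, transfer):
    """succ: dict node -> list of successors;
    entry_facts: facts holding at the entry node;
    transfer: dict node -> function(set) -> set (per-node transfer)."""
    facts_in = {n: set() for n in succ}
    facts_in[entry] = set(entry_facts)
    worklist = list(succ)               # process every node at least once
    while worklist:
        n = worklist.pop()
        out = transfer[n](facts_in[n])  # apply n's transfer function
        for s in succ[n]:
            if not out <= facts_in[s]:  # new facts flow into s
                facts_in[s] |= out
                worklist.append(s)      # s must be re-examined
    return facts_in

# Hypothetical 3-node chain; alias facts are ("p", "q") pairs.
succ = {"entry": ["n1"], "n1": ["n2"], "n2": []}
transfer = {
    "entry": lambda f: f,
    "n1": lambda f: f | {("p", "q")},   # n1 models a pointer assignment
    "n2": lambda f: f,
}
result = iterate(succ, "entry", set(), transfer)
```

The worklist re-queues a node only when new facts reach it, which is what makes the iteration terminate once the fixpoint is reached.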
Optimizing Ackermann's Function by Incrementalization
, 2001
Abstract

Cited by 7 (3 self)
This paper describes a formal derivation of an optimized Ackermann's function following a general and systematic method based on incrementalization. The method identifies an appropriate input increment operation and computes the function by repeatedly performing an incremental computation at the step of the increment. This eliminates repeated subcomputations in executions that follow the straightforward recursive definition of Ackermann's function, yielding an optimized program that is drastically faster and takes extremely little space. This case study uniquely shows the power and limitation of the incrementalization method, as well as both the iterative and recursive nature of computation underlying the optimized Ackermann's function.
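The paper's optimized program is derived by incrementalization, which is not reproduced here; as a crude, simpler stand-in for "eliminating repeated subcomputations", the sketch below contrasts the straightforward recursive definition of Ackermann's function with a memoized variant. This is only an illustration of the redundancy, not the paper's derivation.

```python
from functools import lru_cache

# Ackermann's function, straight from the recursive definition.
def ack(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# Caching repeated subcomputations; a far weaker device than the
# derived incremental program, but enough to expose the redundancy.
@lru_cache(maxsize=None)
def ack_cached(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack_cached(m - 1, 1)
    return ack_cached(m - 1, ack_cached(m, n - 1))
```

For small arguments (e.g. A(2, 3) = 9, A(3, 3) = 61) both agree; the cache hit count grows quickly with the arguments, which is the redundancy the incrementalization method removes by construction rather than by table lookup.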
Program Optimization Using Indexed and Recursive Data Structures
, 2002
Abstract

Cited by 6 (5 self)
This paper describes a systematic method for optimizing recursive functions using both indexed and recursive data structures. The method is based on two critical ideas: first, determining a minimal input increment operation so as to compute a function on repeatedly incremented input; second, determining appropriate additional values to maintain in appropriate data structures, based on what values are needed in computation on an incremented input and how these values can be established and accessed. Once these two are determined, the method extends the original program to return the additional values, derives an incremental version of the extended program, and forms an optimized program that repeatedly calls the incremental program. The method can derive all dynamic programming algorithms found in standard algorithm textbooks. There are many previous methods for deriving efficient algorithms, but none is as simple, general, and systematic as ours.
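The two ideas in the abstract can be seen on a toy case not taken from the paper: for Fibonacci, the minimal input increment is n -> n + 1, and the additional value worth maintaining is the previous Fibonacci number. A minimal sketch under those assumptions:

```python
# Straightforward recursive definition: exponential time.
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Incrementalized version: compute on repeatedly incremented input,
# maintaining the extended value (fib(k), fib(k-1)) at each step.
def fib_inc(n):
    cur, prev = 0, 1          # fib(0) and a fib(-1) = 1 sentinel
    for _ in range(n):        # one incremental step per increment of n
        cur, prev = cur + prev, cur
    return cur
```

Repeatedly calling the incremental step is exactly the shape of the optimized programs the method forms; the maintained pair plays the role of the "appropriate additional values".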
Strengthening invariants for efficient computation
 in Conference Record of the 23rd Annual ACM Symposium on Principles of Programming Languages
, 2001
Abstract

Cited by 6 (4 self)
This paper presents program analyses and transformations for strengthening invariants for the purpose of efficient computation. Finding the stronger invariants corresponds to discovering a general class of auxiliary information for any incremental computation problem. Combining the techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms non-incremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
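Strength reduction, one application the abstract names, can be sketched on a hypothetical loop: a multiplication i * c inside the loop is replaced by a variable t carrying the strengthened invariant t == i * c, maintained by addition.

```python
# Original loop: one multiplication per iteration.
def sums_naive(n, c):
    out = []
    for i in range(n):
        out.append(i * c)
    return out

# Strength-reduced loop: invariant t == i * c holds at the top of each
# iteration and is maintained incrementally by a single addition.
def sums_reduced(n, c):
    out = []
    t = 0
    for i in range(n):
        out.append(t)
        t += c                # maintain the invariant: t stays i * c
    return out
```

The stronger invariant (relating t to i) is the auxiliary information; maintaining it incrementally is what makes the multiplication unnecessary.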
Optimizing aggregate array computations in loops
 ACM Transactions on Programming Languages and Systems
, 2005
Abstract

Cited by 6 (1 self)
An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy in the overall computation. This paper presents a method and algorithms that eliminate such overlapping aggregate array redundancies and shows analytical and experimental performance improvements. The method is based on incrementalization, i.e., updating the values of aggregate array computations from iteration to iteration rather than computing them from scratch in each iteration. This involves maintaining additional values not maintained in the original program. We reduce various analysis problems to solving inequality constraints on loop variables and array subscripts, and we apply results from work on array data dependence analysis. For aggregate array computations that have significant redundancy, incrementalization produces drastic speedup compared to previous optimizations; when there is little redundancy, the benefit might be offset by cache effects and other factors. Previous methods for loop optimizations of arrays do not perform incrementalization, and previous techniques for loop incrementalization do not handle arrays.
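An incrementalized aggregate array computation, in the abstract's sense, can be sketched on a hypothetical sliding-window sum: successive windows overlap in w - 1 elements, so each window's sum can be updated from the previous one instead of being recomputed from scratch.

```python
# From-scratch version: each window sum costs O(w).
def window_sums_naive(a, w):
    return [sum(a[i:i + w]) for i in range(len(a) - w + 1)]

# Incrementalized version: the additional value maintained is the
# current window sum s; each step updates it in O(1).
def window_sums_inc(a, w):
    s = sum(a[:w])            # only the first window is computed fully
    out = [s]
    for i in range(1, len(a) - w + 1):
        s += a[i + w - 1] - a[i - 1]   # add entering, drop leaving element
        out.append(s)
    return out
```

The total cost drops from O(n * w) to O(n), which is the kind of speedup the paper reports when the overlap (redundancy) is significant.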
Solving Regular Tree Grammar Based Constraints
 In Proceedings of the 8th International Static Analysis Symposium
, 2000
Abstract

Cited by 5 (4 self)
This paper describes the precise specification, design, analysis, implementation, and measurements of an efficient algorithm for solving regular tree grammar based constraints. The particular constraints are for dead-code elimination on recursive data, but the method used for the algorithm design and complexity analysis is general and applies to other program analysis problems as well. The method is centered around Paige's finite differencing, i.e., computing expensive set expressions incrementally, and allows the algorithm to be derived and analyzed formally and implemented easily. We study higher-level transformations that make the derived algorithm concise and allow its complexity to be analyzed accurately. Although a rough analysis shows that the worst-case time complexity is cubic in program size, an accurate analysis shows that it is linear in the number of live program points and in other parameters, including mainly the arity of data constructors and the number of selector applications into whose arguments the value constructed at a program point might flow. These parameters explain the performance of the analysis in practice. Our implementation also runs two to ten times as fast as a previous implementation of an informally designed algorithm.
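The general fixpoint pattern behind such constraint solving can be sketched with a naive solver for subset constraints over finite sets; the paper's grammar-based constraints and finite-differencing derivation are far more refined, and the variables and constraints below are hypothetical.

```python
# Naive least-fixpoint solver for constraints of the form src ⊆ dst.
def solve(base, flows):
    """base: dict var -> initial set of atoms in var;
    flows: list of (src, dst) pairs meaning src ⊆ dst."""
    sol = {v: set(s) for v, s in base.items()}
    changed = True
    while changed:                  # iterate until no constraint fires
        changed = False
        for src, dst in flows:
            if not sol[src] <= sol[dst]:
                sol[dst] |= sol[src]
                changed = True
    return sol

# Hypothetical constraint system with a cycle Z ⊆ X, X ⊆ Z.
sol = solve({"X": {"a"}, "Y": {"b"}, "Z": set()},
            [("X", "Z"), ("Y", "Z"), ("Z", "X")])
```

This re-evaluates constraints from scratch each round; Paige-style finite differencing replaces exactly this re-evaluation with incremental updates, which is where the paper's accurate complexity bounds come from.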
Solving Regular Path Queries
 In Proceedings of the 6th International Conference on Mathematics of Program Construction
, 2002
Abstract

Cited by 5 (4 self)
Regular path queries are a way of declaratively specifying program analyses as a kind of regular expressions that are matched against paths in graph representations of programs. This paper describes the precise specification, derivation, and analysis of a complete algorithm and data structures for solving regular path queries. The time and space complexity of the algorithm is linear in the size of the graph. We first show two ways of specifying the problem and deriving a high-level algorithmic solution, using predicate logic and language inclusion, respectively.
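The standard construction behind such queries is to run the query's automaton over the graph's labeled edges, so that matching paths reduces to reachability over (node, state) pairs, linear in the size of the graph. A minimal sketch, with a hypothetical graph, labels, and automaton (not the paper's data structures):

```python
from collections import deque

# Match a query automaton against paths in a labeled graph by BFS over
# the product space of (graph node, automaton state) pairs.
def query(edges, start_nodes, nfa, q0, accept):
    """edges: list of (src, label, dst);
    nfa: dict (state, label) -> next state;
    returns nodes reached by a path whose label string the query accepts."""
    adj = {}
    for s, lab, d in edges:
        adj.setdefault(s, []).append((lab, d))
    frontier = deque((n, q0) for n in start_nodes)
    seen = set(frontier)
    hits = set()
    while frontier:
        node, state = frontier.popleft()
        if state in accept:
            hits.add(node)
        for lab, dst in adj.get(node, []):
            nxt = nfa.get((state, lab))
            if nxt is not None and (dst, nxt) not in seen:
                seen.add((dst, nxt))
                frontier.append((dst, nxt))
    return hits

# Query "a b*": nodes reachable from n0 by one 'a' edge then 'b' edges.
edges = [("n0", "a", "n1"), ("n1", "b", "n2"), ("n2", "b", "n3"),
         ("n0", "c", "n4")]
nfa = {("q0", "a"): "q1", ("q1", "b"): "q1"}
hits = query(edges, ["n0"], nfa, "q0", {"q1"})
```

Each (node, state) pair is visited at most once, which is where the linear bound in the graph size comes from.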
A Language Theoretic Approach to Algorithm and Software Development
, 1999
Abstract

Cited by 3 (0 self)
This note is a description of my research over the last few years as a doctoral student at the Courant Institute. A part of this research was collaborative work with my advisor Bob Paige. This statement is divided into three sections. The first section is introductory, the second describes the work that will go into my dissertation, and the third section describes some other work that I have done, and some possible future directions for my research.
1 Introduction
It is generally agreed that high level programming languages can not only substantially reduce the time it takes to produce code but also help increase reliability of software. Such languages allow programs to be specified more in terms of algorithmic concepts than implementation details. A more conceptual level of discourse not only makes programs easier to write but also easier to prove correct. Widely used low level languages such as Fortran, C, C++ were designed to allow a straightforward translation of programs into ...
Incremental computation for transformational software development
Abstract

Cited by 2 (2 self)
Given a program f and an input change ⊕, we wish to obtain an incremental program that computes f(x ⊕ y) efficiently by making use of the value of f(x), the intermediate results computed in computing f(x), and auxiliary information about x that can be inexpensively maintained. Obtaining such incremental programs is an essential part of the transformational-programming approach to software development and enhancement. This paper presents a systematic approach that discovers a general class of useful auxiliary information, combines it with useful intermediate results, and obtains an efficient incremental program that uses and maintains these intermediate results and auxiliary information. We give a number of examples from list processing, VLSI circuit design, image processing, etc.
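The f(x ⊕ y) pattern can be sketched on a toy problem not taken from the paper: f computes the average of a list, ⊕ appends an element, and the auxiliary information maintained about x is the pair (sum, count), from which f(x ⊕ y) follows in O(1).

```python
# From-scratch computation of f.
def f(xs):
    return sum(xs) / len(xs)

# Incremental program: given auxiliary information about x and the
# change y, compute f(x ⊕ y) and the updated auxiliary information.
def f_inc(aux, y):
    """aux = (sum, count) for x; ⊕ appends y to the list."""
    s, n = aux
    s, n = s + y, n + 1       # maintain the auxiliary information
    return s / n, (s, n)

xs = [2, 4, 6]
aux = (sum(xs), len(xs))      # auxiliary information about x
val, aux = f_inc(aux, 8)      # value of f on the incremented input
```

Here the cached value of f(x) alone would not suffice (an average cannot be updated without the count), which is exactly why discovering the right auxiliary information matters.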
Automatic Derivation of Logic Programs by Transformation
 Course notes for ESSLLI
, 2000
"... We present the program transformation methodology for the automatic development of logic programs based on the rules + strategies approach. We consider both definite programs and normal programs and we present the basic transformation rules and strategies which are described in the literature. To il ..."
Abstract

Cited by 1 (0 self)
We present the program transformation methodology for the automatic development of logic programs based on the rules + strategies approach. We consider both definite programs and normal programs and we present the basic transformation rules and strategies which are described in the literature. To illustrate the power of the program transformation approach we also give some examples of program development. Finally, we show how to use program transformations for proving properties of predicates and synthesizing programs from logical specifications.