Results 1–10 of 45
A relational approach to interprocedural shape analysis
In 11th SAS, 2004
"... Abstract. This paper addresses the verification of properties of imperative programs withrecursive procedure calls, heapallocated storage, and destructive updating of pointervalued fieldsi.e., interprocedural shape analysis. It presents a way to harness some previouslyknown approaches to interpr ..."
Cited by 45 (11 self)
Abstract. This paper addresses the verification of properties of imperative programs with recursive procedure calls, heap-allocated storage, and destructive updating of pointer-valued fields, i.e., interprocedural shape analysis. It presents a way to harness some previously known approaches to interprocedural dataflow analysis, which in past work have been applied only to much less rich settings, for interprocedural shape analysis.
1 Introduction
This paper concerns techniques for static analysis of recursive programs that manipulate heap-allocated storage and perform destructive updating of pointer-valued fields. The goal is to recover shape descriptors that provide information about the characteristics of the data structures that a program's pointer variables can point to. Such information can be used to help programmers understand certain aspects of the program's behavior, to verify properties of the program, and to optimize or parallelize the program.
Transformational Design and Implementation Of A New Efficient Solution To The Ready Simulation Problem
Science of Computer Programming, 1995
"... A transformational methodology is described for simultaneously designing algorithms and developing programs. The methodology makes use of three transformational tools  dominated convergence, finite differencing, and realtime simulation of a set machine on a RAM. We illustrate the methodology t ..."
Cited by 41 (2 self)
A transformational methodology is described for simultaneously designing algorithms and developing programs. The methodology makes use of three transformational tools: dominated convergence, finite differencing, and real-time simulation of a set machine on a RAM. We illustrate the methodology to design a new O(mn + n^2)-time algorithm for deciding when n-state, m-transition processes are ready similar, which is a substantial improvement on the Θ(mn^6) algorithm presented in [6]. The methodology is also used to derive a program whose performance, we believe, is competitive with the most efficient hand-crafted implementation of our algorithm. Ready simulation is the finest fully abstract notion of process equivalence in the CCS setting.
1 Introduction
Currently there is a wide gap between the goals and practices of research in the theory of algorithm design and the science of programming, which we believe is ...
A preliminary version of this paper appeared in the Conf...
Deriving incremental programs
1993
"... A systematic approach i s g i v en for deriving incremental programs from nonincremental programs written in a standard functional programming language. We exploit a number of program analysis and transformation techniques and domainspeci c knowledge, centered around e ective utilization of cachin ..."
Cited by 39 (21 self)
A systematic approach is given for deriving incremental programs from non-incremental programs written in a standard functional programming language. We exploit a number of program analysis and transformation techniques and domain-specific knowledge, centered around effective utilization of caching, in order to provide a degree of incrementality not otherwise achievable by a generic incremental evaluator.
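The caching idea behind such derivations can be sketched in a few lines. This is a minimal illustration of ours, not the paper's derivation system, and all names are hypothetical: a non-incremental function (here, an average) is paired with a change operation (appending an element), and the derived incremental version maintains cached intermediate results instead of recomputing.

```python
def average(xs):
    # Non-incremental: O(n) work on every call.
    return sum(xs) / len(xs)

class IncrementalAverage:
    """Caches the intermediate results (running sum and count) that make
    the result of `average` cheap to maintain under appends."""
    def __init__(self, xs):
        self.total = sum(xs)   # cached intermediate result
        self.count = len(xs)   # cached intermediate result

    def append(self, x):
        # Incremental version: O(1) per change operation.
        self.total += x
        self.count += 1
        return self.total / self.count

xs = [1, 2, 3]
inc = IncrementalAverage(xs)
assert inc.append(4) == average([1, 2, 3, 4])
```

The point is that the cached state (`total`, `count`) is exactly the auxiliary information that makes the function's result maintainable under the change operation; a generic incremental evaluator that only memoizes whole calls could not exploit it.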
From Datalog rules to efficient programs with time and space guarantees
In PPDP ’03: Proceedings of the 5th ACM SIGPLAN International Conference on Principles and Practice of Declarative Programming, 2003
"... This paper describes a method for transforming any given set of Datalog rules into an efficient specialized implementation with guaranteed worstcase time and space complexities, and for computing the complexities from the rules. The running time is optimal in the sense that only useful combinations ..."
Cited by 30 (12 self)
This paper describes a method for transforming any given set of Datalog rules into an efficient specialized implementation with guaranteed worst-case time and space complexities, and for computing the complexities from the rules. The running time is optimal in the sense that only useful combinations of facts that lead to all hypotheses of a rule being simultaneously true are considered, and each such combination is considered exactly once. The associated space usage is optimal in that it is the minimum space needed for such consideration modulo scheduling optimizations that may eliminate some summands in the space usage formula. The transformation is based on a general method for algorithm design that exploits fixed-point computation, incremental maintenance of invariants, and combinations of indexed and linked data structures. We apply the method to a number of analysis problems, some with improved algorithm complexities and all with greatly improved algorithm understanding and greatly simplified complexity analysis.
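The flavor of the specialized implementations can be sketched for the standard transitive-closure rules `path(x,y) :- edge(x,y)` and `path(x,z) :- path(x,y), edge(y,z)`. The code below is a hand-written illustration of ours, not the paper's generated code: a worklist ensures each derived fact is used exactly once, and an index on `edge` by its first argument ensures that only useful combinations of facts are ever formed.

```python
from collections import defaultdict

def transitive_closure(edges):
    # Index: y -> {z | edge(y, z)}, so joins touch only matching facts.
    succ = defaultdict(set)
    for y, z in edges:
        succ[y].add(z)

    path = set(edges)        # path facts derived so far (rule 1)
    worklist = list(path)    # facts not yet used as path(x, y) in rule 2
    while worklist:
        x, y = worklist.pop()
        for z in succ[y]:    # each (path fact, edge fact) combination once
            if (x, z) not in path:
                path.add((x, z))
                worklist.append((x, z))
    return path

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# -> [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Each combination of a `path` fact and a matching `edge` fact is considered exactly once, so the running time is proportional to the number of such useful combinations, which is the optimality claim in the abstract.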
Viewing A Program Transformation System At Work
Joint 6th International Conference on Programming Language Implementation and Logic Programming (PLILP) and 4th International Conference on Algebraic and Logic Programming (ALP), volume 844 of Lecture Notes in Computer Science, 1994
"... How to decrease labor and improve reliability in the development of efficient implementations of nonnumerical algorithms and labor intensive software is an increasingly important problem as the demand for computer technology shifts from easier applications to more complex algorithmic ones; e.g., opt ..."
Cited by 27 (2 self)
How to decrease labor and improve reliability in the development of efficient implementations of non-numerical algorithms and labor-intensive software is an increasingly important problem as the demand for computer technology shifts from easier applications to more complex algorithmic ones; e.g., optimizing compilers for supercomputers, intricate data structures to implement efficient solutions to operations research problems, search and analysis algorithms in genetic engineering, complex software tools for workstations, design automation, etc. It is also a difficult problem that is not solved by current CASE tools and software management disciplines, which are oriented towards data processing and other applications, where the implementation and a prediction of its resource utilization follow more directly from the specification. Recently, Cai and Paige reported experiments suggesting a way to implement non-numerical algorithms in C at a programming rate (i.e., source lines per second) t...
Mechanical Translation of Set Theoretic Problem Specifications Into Efficient RAM Code: A Case Study
Proc. EUROCAL 85, 1985
"... This paper illustrates a fully automatic topdown approach to program development in which formal problem specifications are mechanically translated into efficient RAM code. This code is guaranteed to be totally correct and an upper bound on its worst case asymptotic running time is automatically de ..."
Cited by 26 (8 self)
This paper illustrates a fully automatic top-down approach to program development in which formal problem specifications are mechanically translated into efficient RAM code. This code is guaranteed to be totally correct and an upper bound on its worst-case asymptotic running time is automatically determined. The user is only required to supply the system with a formal problem specification, and is relieved of all responsibilities in the rest of the program development process. These results are obtained, in part, by greatly restricting the system to handle a class of determinate, set theoretic, tractable problems. The most essential transformational techniques that are used are fixed point iteration, finite differencing, and data structure selection. Rudimentary forms of these techniques have been implemented and used effectively in the RAPTS transformational programming system. This paper explains the conceptual underpinnings of our approach by considering the problem of attribute closure for relational databases and systematically deriving a program that implements a linear time solution.
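The worked example the abstract names, attribute closure, admits a well-known linear-time algorithm whose shape matches the techniques listed: fixed point iteration gives the naive re-scanning version, and finite differencing replaces the re-scan with per-dependency counters of still-missing left-hand-side attributes. The rendering below is ours, not the paper's derived code, and the names are hypothetical.

```python
from collections import defaultdict

def attribute_closure(attrs, fds):
    """attrs: initial set of attributes; fds: list of (lhs_set, rhs_set)
    functional dependencies. Returns the closure of attrs under fds."""
    missing = [len(lhs) for lhs, _ in fds]  # unsatisfied LHS attrs per FD
    uses = defaultdict(list)                # attribute -> FDs whose LHS contains it
    for i, (lhs, _) in enumerate(fds):
        for a in lhs:
            uses[a].append(i)

    closure = set(attrs)
    worklist = list(closure)
    while worklist:
        a = worklist.pop()
        for i in uses[a]:
            missing[i] -= 1
            if missing[i] == 0:             # LHS now entirely in the closure
                for b in fds[i][1]:
                    if b not in closure:
                        closure.add(b)
                        worklist.append(b)
    return closure

# A -> B and B -> C give {A}+ = {A, B, C}
print(sorted(attribute_closure({"A"}, [({"A"}, {"B"}), ({"B"}, {"C"})])))
# -> ['A', 'B', 'C']
```

Each attribute is popped from the worklist at most once and each FD's counter is decremented at most once per left-hand-side attribute, so the total work is linear in the size of the input dependencies.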
Type Analysis and Data Structure Selection
1991
"... Schwartz et al. described an optimization to implement builtin abstract types such as sets and maps with efficient data structures. Their transformation rests on the discovery of finite universal sets, called bases, to be used for avoiding data replication and for creating aggregate data structures ..."
Cited by 15 (0 self)
Schwartz et al. described an optimization to implement built-in abstract types such as sets and maps with efficient data structures. Their transformation rests on the discovery of finite universal sets, called bases, to be used for avoiding data replication and for creating aggregate data structures that implement associative access by simpler cursor or pointer access. The SETL implementation used global analysis similar to classical dataflow for typings and for set inclusion and membership relationships to determine bases. However, the optimized data structures selected by this optimization did not include a primitive linked list or array, and all optimized data structures retained some degree of hashing. Hence, this heuristic approach did not guarantee a uniform improvement in performance over the use of default representations. The analysis was complicated by SETL's imperative style, weak typing, and low level control structures. The implemented optimizer was large (about 20,000 line...
Program Derivation With Verified Transformations: A Case Study
1995
"... A program development methodology based on verified program transformations is described and illustrated through derivations of a high level bisimulation algorithm and an improved minimumstate DFA algorithm. Certain doubts that were raised about the correctness of an initial paperandpencil deriva ..."
Cited by 13 (3 self)
A program development methodology based on verified program transformations is described and illustrated through derivations of a high level bisimulation algorithm and an improved minimum-state DFA algorithm. Certain doubts that were raised about the correctness of an initial paper-and-pencil derivation of the DFA minimization algorithm were laid to rest by machine-checked formal proofs of the most difficult derivational steps. Although the protracted labor involved in designing and checking these proofs was almost overwhelming, the expense was somewhat offset by a successful reuse of major portions of these proofs. In particular, the DFA minimization algorithm is obtained by specializing and then extending the last step in the derivation of the high level bisimulation algorithm. Our experience suggests that a major focus of future research should be aimed towards improving the technology of machine-checkable proofs: their construction, presentation, and reuse. This paper demonstrat...
Principled Strength Reduction
Algorithmic Languages and Calculi, 1996
"... This paper presents a principled approach for optimizing iterative (or recursive) programs. The approach formulates a loop body as a function f and a change operation \Phi, incrementalizes f with respect to \Phi, and adopts an incrementalized loop body to form a new loop that is more efficient. Thre ..."
Cited by 10 (9 self)
This paper presents a principled approach for optimizing iterative (or recursive) programs. The approach formulates a loop body as a function f and a change operation Φ, incrementalizes f with respect to Φ, and adopts an incrementalized loop body to form a new loop that is more efficient. Three general optimizations are performed as part of the adoption; they systematically handle initializations, termination conditions, and final return values on exits of loops. These optimizations are either omitted, or done in implicit, limited, or ad hoc ways in previous methods. The new approach generalizes classical loop optimization techniques, notably strength reduction, in optimizing compilers, and it unifies and systematizes various optimization strategies in transformational programming. Such principled strength reduction performs drastic program efficiency improvement via incrementalization and appreciably reduces code size via associated optimizations. We give examples where this app...
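The classical instance that this approach generalizes can be shown concretely. In the sketch below (our illustration, with hypothetical names, not an example from the paper), the loop body computes f(i) = i*c, the change operation is i -> i+1, and incrementalization gives f(i+1) = f(i) + c, replacing a multiplication per iteration with an addition while preserving the initialization, termination condition, and final return value.

```python
def sum_of_multiples(n, c):
    # Original loop: one multiplication per iteration.
    total = 0
    for i in range(n):
        total += i * c
    return total

def sum_of_multiples_reduced(n, c):
    # Strength-reduced loop derived by incrementalization.
    total = 0
    f = 0            # initialization: f(0) = 0 * c
    for i in range(n):
        total += f
        f += c       # incremental update: f(i+1) = f(i) + c
    return total     # final return value preserved on loop exit

assert sum_of_multiples(10, 3) == sum_of_multiples_reduced(10, 3)
```

The explicit handling of the initialization (`f = 0`) and of the value returned at loop exit corresponds to the three adoption optimizations the abstract describes; classical strength reduction in compilers performs only the update rewriting.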