A Comparative Revisitation of Some Program Transformation Techniques
 Partial Evaluation, Int'l Seminar, Dagstuhl
, 1996
Abstract

Cited by 21 (0 self)
We revisit the main techniques of program transformation which are used in partial evaluation, mixed computation, supercompilation, generalized partial computation, rule-based program derivation, program specialization, compiling control, and the like. We present a methodology which underlies these techniques as a 'common pattern of reasoning' and explains the various correspondences which can be established among them. This methodology consists of three steps: i) symbolic computation, ii) search for regularities, and iii) program extraction. We also discuss some control issues which occur when performing these steps.

1 Introduction

During the past years researchers working in various areas of program transformation, such as partial evaluation, mixed computation, supercompilation, generalized partial computation, rule-based program derivation, program specialization, and compiling control, have been using very similar techniques for the development and derivation of programs. Unfor...
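The three steps can be seen in miniature in a toy specializer for a power function. This is a hypothetical illustration, not the paper's system: symbolic computation unfolds the recursion on the static exponent, the regularity found is one multiplication per unfolding, and extraction emits a residual program.

```python
def specialize_power(n):
    """Toy partial evaluator for power(x, n) = x * power(x, n - 1).
    Illustrates: i) symbolic computation, ii) search for regularities,
    iii) program extraction. Hypothetical example, not the paper's system."""
    # i) symbolic computation: unfold the recursion on the static exponent n
    expr = "1"
    for _ in range(n):
        expr = "x * " + expr  # ii) the regularity: one multiplication per unfolding
    # iii) program extraction: emit a residual, recursion-free program
    src = "def power_{0}(x):\n    return {1}\n".format(n, expr)
    namespace = {}
    exec(src, namespace)
    return namespace["power_{0}".format(n)]

power_3 = specialize_power(3)
print(power_3(2))  # x * x * x * 1 with x = 2, i.e. 8
```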
Loop optimization for aggregate array computations
Abstract

Cited by 15 (7 self)
An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy in the overall computation. This paper presents a method and algorithms that eliminate such overlapping aggregate array redundancies and shows both analytical and experimental performance improvements. The method is based on incrementalization, i.e., updating the values of aggregate array computations from iteration to iteration rather than computing them from scratch in each iteration. This involves maintaining additional information not maintained in the original program. We reduce various analysis problems to solving inequality constraints on loop variables and array subscripts, and we apply results from work on array data dependence analysis. Incrementalizing aggregate array computations produces drastic program speedup compared to previous optimizations. Previous methods for loop optimizations of arrays do not perform incrementalization, and previous techniques for loop incrementalization do not handle arrays.
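As a minimal sketch of incrementalization (not the paper's algorithm, which handles general nested loops and subscripts), consider sliding-window sums over an array: consecutive windows overlap in k - 1 elements, and the overlapping work can be eliminated by updating the previous window's sum instead of recomputing it.

```python
def window_sums_naive(a, k):
    # Aggregate array computation: for each i, sum a[i:i+k] from scratch, O(n*k).
    return [sum(a[i:i + k]) for i in range(len(a) - k + 1)]

def window_sums_incremental(a, k):
    # Incrementalized version: update the previous aggregate, O(n).
    if len(a) < k:
        return []
    sums = [sum(a[:k])]
    for i in range(1, len(a) - k + 1):
        # Maintain the aggregate across iterations: subtract the element
        # leaving the window and add the element entering it.
        sums.append(sums[-1] - a[i - 1] + a[i + k - 1])
    return sums

print(window_sums_incremental([1, 2, 3, 4, 5], 2))  # [3, 5, 7, 9]
```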
Eliminating dead code on recursive data
 Science of Computer Programming
, 1999
Abstract

Cited by 14 (4 self)
This paper describes a general and powerful method for dead code analysis and elimination in the presence of recursive data constructions. We represent partially dead recursive data using liveness patterns based on general regular tree grammars extended with the notion of live and dead, and we formulate the analysis as computing liveness patterns at all program points based on program semantics. This analysis yields a most precise liveness pattern for the data at each program point, which is significantly more precise than results from previous methods. The analysis algorithm takes cubic time in terms of the size of the program in the worst case but is very efficient in practice, as shown by our prototype implementation. The analysis results are used to identify and eliminate dead code. The general framework for representing and analyzing properties of recursive data structures using general regular tree grammars applies to other analyses as well.
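A much-simplified sketch of the idea (finite liveness patterns rather than the paper's regular-tree-grammar formulation): a pattern mirrors the shape of the data and marks each component live or dead, and dead components can be eliminated without changing the observable result.

```python
LIVE, DEAD = "live", "dead"

def sum_firsts(xs):
    # Only the first component of each pair is used: the second is dead here.
    return sum(a for (a, _) in xs)

def prune(value, pattern):
    """Eliminate dead components of a value according to a liveness pattern
    (a tree with 'live'/'dead' leaves mirroring the data). Simplified sketch,
    not the grammar-based analysis of the paper."""
    if pattern == DEAD:
        return None            # dead data: drop it
    if pattern == LIVE:
        return value           # fully live: keep as is
    # Otherwise the pattern is a tuple of sub-patterns, one per component.
    return type(value)(prune(v, p) for v, p in zip(value, pattern))

pairs = [(1, "big"), (2, "unused"), (3, "payload")]
# Pattern: each list element is a pair whose second component is dead.
pruned = [prune(p, (LIVE, DEAD)) for p in pairs]
print(pruned)                      # [(1, None), (2, None), (3, None)]
print(sum_firsts(pairs) == sum_firsts(pruned))  # True
```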
Efficient Optimization of Iterative Queries
 In Fourth International Workshop on Database Programming Languages
, 1993
Abstract

Cited by 12 (5 self)
This paper presents a new query algebra based on fold iterations that facilitates database implementation. An algebraic normalization algorithm is introduced that reduces any program expressed in this algebra to a canonical form that generates no intermediate data structures and has no more nested iterations than the initial program. Given any inductive data type, our system can automatically synthesize the definition of the fold operator that traverses instances of this type, and, more importantly, it can produce the necessary transformations for optimizing expressions involving this fold operator. Database implementation in our framework is controlled by user-defined mappings from abstract types to physical structures. The optimizer uses this information to translate abstract programs and queries into concrete algorithms that conform to the type transformation. Database query optimization can be viewed as a search over the reduced space of all canonical forms which are equivalent to t...
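A tiny Python sketch of the normalization idea (hypothetical, using list folds rather than the paper's algebra): two separate iterations that build an intermediate list can be reduced to a single fold that builds none.

```python
from functools import reduce

def fold(f, z, xs):
    # Fold operator over lists, the workhorse of the query algebra.
    return reduce(f, xs, z)

def doubled_evens_naive(xs):
    evens = [x for x in xs if x % 2 == 0]  # intermediate data structure
    return [2 * x for x in evens]          # second traversal

def doubled_evens_fused(xs):
    # Normalized form: one fold, no intermediate list, no extra nesting.
    def step(acc, x):
        return acc + [2 * x] if x % 2 == 0 else acc
    return fold(step, [], xs)

print(doubled_evens_fused(range(6)))  # [0, 4, 8]
```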
Active Patterns
 In 8th Int. Workshop on Implementation of Functional Languages, LNCS 1268
, 1996
Abstract

Cited by 12 (3 self)
Active patterns apply preprocessing functions to data type values before they are matched. This is of use for unfree data types where more than one representation exists for an abstract value: in many cases there is a specific representation for which function definitions become very simple, and active patterns simply allow one to assume this specific representation in function definitions. We define the semantics of active patterns and describe their implementation.

1 Introduction

Pattern matching is a well-appreciated concept of (functional) programming. It contributes to concise function definitions by implicitly decomposing data type values. In many cases, a large part of a function's definition is only needed to prepare for recursive application and to finally lead to a base case for which the definition itself is rather simple. Such function definitions could be simplified considerably if these preparatory computations could be factorized and given in a separate place. Recognizing t...
Constraints to Stop Higher-Order Deforestation
 In 24th ACM Symposium on Principles of Programming Languages
, 1997
Abstract

Cited by 12 (1 self)
Wadler's deforestation algorithm eliminates intermediate data structures from functional programs. To be suitable for inclusion in a compiler, it must terminate on all programs. Several techniques to ensure termination of deforestation on all first-order programs are known, but a technique for higher-order programs was only recently introduced by Hamilton, and elaborated and implemented in the Glasgow Haskell compiler by Marlow. We introduce a new technique for ensuring termination of deforestation on all higher-order programs that allows useful transformation steps prohibited in Hamilton's and Marlow's techniques.

1 Introduction

Lazy, higher-order, functional programming languages lend themselves to a certain style of programming which uses intermediate data structures [28].

Example 1. Consider the following program.

letrec a = λx.λy. case x of
                    []      → y
                    (h : t) → h : a t y
in λu.λv.λw. a (a u v) w

The term λu.λv.λw. a (a u v) w appends the three lists u, v, and w. Appending u and v ...
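Transcribing Example 1 into Python (an illustrative sketch, not the algorithm's actual output), deforestation removes the intermediate list built by the inner append:

```python
def append(xs, ys):
    # The recursive append of Example 1, written over Python lists.
    return ys if not xs else [xs[0]] + append(xs[1:], ys)

def append3_naive(u, v, w):
    # Builds the intermediate list (u ++ v), then traverses it again.
    return append(append(u, v), w)

def append3_deforested(u, v, w):
    # Deforested result: the elements of u are emitted directly,
    # so the intermediate list (u ++ v) is never constructed.
    if not u:
        return append(v, w)  # residual call once u is exhausted
    return [u[0]] + append3_deforested(u[1:], v, w)

print(append3_deforested([1], [2], [3]))  # [1, 2, 3]
```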
Integer Constraints to Stop Deforestation
, 1996
Abstract

Cited by 10 (2 self)
Deforestation is a transformation of functional programs to remove intermediate data structures. It is based on outermost unfolding of function calls, where folding occurs when unfolding takes place within the same nested function call. Since unrestricted unfolding may encounter arbitrarily many terms, a termination analysis has to determine those subterms where unfolding is possibly dangerous. We show that such an analysis can be obtained from a control flow analysis by an extension with integer constraints, essentially at no loss in efficiency.

1 Introduction

The key idea of flow analysis for functional languages is to define an abstract meaning in terms of program points, i.e., subexpressions of the program possibly evaluated during program execution [Pa95]. Such analyses have been invented for tasks like type recovery [Sh91], binding time analysis [Co93], or safety analysis [PS95]. Conceptually, these are closely related to A. Deutsch's store-based alias analysis [D...
Dependence Analysis for Recursive Data
 In Proceedings of the IEEE 1998 International Conference on Computer Languages
, 1998
Abstract

Cited by 10 (5 self)
This paper describes a general and powerful method for dependence analysis in the presence of recursive data constructions. The particular analysis presented is for identifying partially dead recursive data, but the general framework for representing and manipulating recursive substructures applies to all dependence analyses. The method uses projections based on general regular tree grammars extended with notions of live and dead, and defines the analysis as mutually recursive grammar transformers. To guarantee that the analysis terminates, we use carefully designed approximations. We describe how to approximate argument projections with grammars that can be computed without iterating and how to approximate resulting projections with a widening operation. We design an approximation operation that combines two grammars to give the most precise deterministic result possible. All grammar operations used in the analysis have efficient algorithms. The overall analysis yields significantly m...
A Transformation Method for Dynamic-Sized Tabulation
, 1995
Abstract

Cited by 9 (3 self)
Tupling is a transformation tactic to obtain new functions, without redundant calls and/or multiple traversals of common inputs. It achieves this feat by allowing each set (tuple) of function calls to be computed recursively from its previous set. In previous works by Chin and Khoo [8, 9], a safe (terminating) fold/unfold transformation algorithm was developed for some classes of functions which are guaranteed to be successfully tupled. However, these classes of functions currently use static-sized tables for eliminating the redundant calls. As shown by Richard Bird in [3], there are also other classes of programs whose redundant calls could only be eliminated by using dynamic-sized tabulation. This paper proposes a new solution to dynamic-sized tabulation by an extension to the tupling tactic. Our extension uses lambda abstractions which can be viewed either as dynamic-sized tables or as applications of the higher-order generalisation technique to facilitate tupling. Significant speedups could be obtained after the transformed programs were vectorised, as confirmed by experiment.
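The classic illustration of tupling uses a static-sized, two-element tuple (the dynamic-sized tables of this paper address harder classes that such a fixed tuple cannot handle): the Fibonacci function's redundant calls disappear once each tuple is computed from its predecessor.

```python
def fib_naive(n):
    # Redundant calls: fib(n-2) is recomputed inside fib(n-1).
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_tupled(n):
    # Tupling: compute the tuple (fib(k), fib(k+1)) recursively from
    # its previous tuple, eliminating the redundant calls.
    def pair(k):
        if k == 0:
            return (0, 1)
        a, b = pair(k - 1)
        return (b, a + b)
    return pair(n)[0]

print([fib_tupled(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```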
Calculating Software Generators from Solution Specifications
 In TAPSOFT'95, volume 915 of LNCS
, 1994
Abstract

Cited by 9 (7 self)
Software application generators can eliminate many of the technical aspects of programming for most computer users. We have developed a uniform approach to the design of program generators, based upon a simple idea: provide a declarative specification language for each application domain and give it a computable, denotational semantics. To make this idea practical, however, requires a comprehensive system for transforming and translating expressions in the higher-order functional operators of the semantics formulation into a reasonably efficient implementation expressed in a first-order, imperative programming language. This paper describes the system we have built to accomplish this. The technique and the system have been applied to produce a generator for modules that validate and translate messages sent from a peripheral sensor to a central controller. The input to a generator is a specification of the data formats and data constraints that characterize a message. The output is an...
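A toy sketch of the spec-to-validator idea (the spec format, field names, and widths here are entirely hypothetical; the paper's system compiles denotational semantics, not a Python interpreter): a declarative message specification is mapped to an executable validation function.

```python
# Hypothetical declarative spec: field name -> (byte width, allowed values).
SPEC = {"id": (1, range(0, 16)), "temp": (1, range(0, 100))}

def generate_validator(spec):
    """Map a declarative message spec to a validation function.
    Toy illustration of a generator, not the system described in the paper."""
    def validate(msg: bytes) -> bool:
        total = sum(width for width, _ in spec.values())
        if len(msg) != total:
            return False  # wrong message length
        pos = 0
        for name, (width, allowed) in spec.items():
            value = int.from_bytes(msg[pos:pos + width], "big")
            if value not in allowed:
                return False  # field out of its declared range
            pos += width
        return True
    return validate

validate = generate_validator(SPEC)
print(validate(bytes([3, 42])))   # True
print(validate(bytes([3, 200])))  # False: temp out of range
```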