Results 11–20 of 36
Integer Constraints to Stop Deforestation
, 1996
Abstract

Cited by 11 (2 self)
Deforestation is a transformation of functional programs that removes intermediate data structures. It is based on outermost unfolding of function calls, where folding occurs when unfolding takes place within the same nested function call. Since unrestricted unfolding may encounter arbitrarily many terms, a termination analysis has to determine those subterms where unfolding is possibly dangerous. We show that such an analysis can be obtained from a control flow analysis by an extension with integer constraints, essentially at no loss of efficiency. 1 Introduction The key idea of flow analysis for functional languages is to define an abstract meaning in terms of program points, i.e., subexpressions of the program possibly evaluated during program execution [Pa95]. Such analyses have been devised for tasks like type recovery [Sh91], binding time analysis [Co93], or safety analysis [PS95]. Conceptually, these are closely related to A. Deutsch's store-based alias analysis [D...
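The intermediate-structure elimination that deforestation performs can be pictured with a hand-fused pipeline. This is only an analogue of the idea, not the paper's unfold/fold algorithm; the function names below are our own:

```python
# Illustrative analogue of deforestation: a consumer composed with a
# producer builds an intermediate list; the fused version does not.

def squares(xs):
    return [x * x for x in xs]          # produces an intermediate list

def sum_squares_naive(xs):
    return sum(squares(xs))             # the list exists only to be consumed

def sum_squares_deforested(xs):
    # Unfolding `squares` into the consumer and folding the two loops
    # into one removes the intermediate list entirely.
    total = 0
    for x in xs:
        total += x * x
    return total

assert sum_squares_naive([1, 2, 3]) == sum_squares_deforested([1, 2, 3]) == 14
```

Both definitions compute the same value; only the intermediate structure disappears, which is exactly the extensional-equality obligation a deforestation transformation must discharge.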
Supercompiler HOSC: proof of correctness
, 2010
Abstract

Cited by 9 (9 self)
The paper presents the proof of correctness of an experimental supercompiler, HOSC, dealing with higher-order functions.
Deriving Analysers By Folding/unfolding of Natural Semantics and a Case Study: Slicing
, 1998
Abstract

Cited by 8 (0 self)
We consider specifications of analysers expressed as compositions of two functions: a semantic function, which returns a natural semantics derivation tree, and a property defined by recurrence on derivation trees. A recursive definition of a dynamic analyser can be obtained by fold/unfold program transformation combined with deforestation. We apply our framework to the derivation of a slicing analysis for a logic programming language. Keywords: systematic derivation, program transformation, natural semantics, proof tree, slicing analysis.
Positive supercompilation for a higher-order call-by-value language
 In POPL ’09: Proceedings of the 36th annual ACM SIGPLAN-SIGACT symposium on Principles of programming languages
, 2009
Vol. 6 (3:5) 2010, pp. 1–39, www.lmcs-online.org
Formal Efficiency Analysis for Tree Transducer Composition
, 2004
Abstract

Cited by 7 (1 self)
We study the question of efficiency improvement or deterioration for a semantics-preserving program transformation technique for (lazy) functional languages, based on composition of restricted macro tree transducers. By annotating programs to reflect the intensional property "computation time" explicitly in the computed output and by manipulating such annotations, we formally prove syntactic conditions under which the composed program is guaranteed to be no less efficient than the original program with respect to the number of call-by-name reduction steps required to reach normal form. Under additional conditions the guarantee also holds for call-by-need semantics. The criteria developed can be checked automatically and efficiently, and thus are suitable for integration into an optimizing compiler.
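The idea of making "computation time" part of the computed output can be mimicked by returning an explicit step count alongside each result and comparing the counts. This is a toy sketch in that spirit, with our own function names, not the paper's macro-tree-transducer formalism:

```python
# Each program returns (result, steps), so efficiency claims become
# assertions about the returned step counts rather than external timings.

def two_passes(xs):
    # Original composition: increment in one pass, square in a second.
    steps, tmp, out = 0, [], []
    for x in xs:
        tmp.append(x + 1)
        steps += 1
    for y in tmp:
        out.append(y * y)
        steps += 1
    return out, steps

def one_pass(xs):
    # Composed program: both operations fused into a single pass.
    steps, out = 0, []
    for x in xs:
        out.append((x + 1) * (x + 1))
        steps += 1
    return out, steps

r1, s1 = two_passes([1, 2, 3])
r2, s2 = one_pass([1, 2, 3])
assert r1 == r2 == [4, 9, 16]
assert s2 <= s1  # the composed program is no less efficient
```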
Higher-Order Expression Procedures
 In Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM)
, 1995
Abstract

Cited by 7 (2 self)
We investigate the soundness of a specialisation technique due to Scherlis, expression procedures, in the context of a higher-order non-strict functional language. An expression procedure is a generalised procedure construct providing a contextually specialised definition. The addition of expression procedures thereby facilitates the manipulation and specialisation of programs. In the expression procedure approach, programs thus generalised are transformed by means of three key transformation rules: composition, application and abstraction. Arguably the most notable, yet most overlooked, feature of the expression procedure approach to transformation is that the transformation rules always preserve the meaning of programs. This is in contrast to the unfold/fold transformation rules of Burstall and Darlington. In Scherlis' thesis, this distinguishing property was shown to hold for a strict first-order language. Rules for call-by-name evaluation order were stated but not proved correct....
Reachability Analysis in Verification via Supercompilation
, 2008
Abstract

Cited by 7 (3 self)
We present an approach to verification of parameterized systems which is based on a program transformation technique known as supercompilation. In this approach, statements about safety properties of a system to be verified are translated into statements about properties of a program that simulates and tests the system. Supercompilation is then used to establish the required properties of the program. In this paper we show that the reachability analysis performed by supercompilation can be seen as the proof of a correctness condition by induction. We formulate suitable induction principles and proof strategies and illustrate their use with examples of verification of parameterized protocols.
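The reachability analysis described above can be pictured with a plain breadth-first search standing in for driving. The token-ring system and all names below are our own toy example, not taken from the paper:

```python
from collections import deque

# Toy safety check: a 3-process token ring where the token is passed to
# the next process. Safety property: exactly one token in every reachable
# state. BFS over the state space stands in for supercompilation driving.

def step(state):
    """Yield the successor state: the token moves to the next process."""
    i = state.index(1)
    nxt = list(state)
    nxt[i] = 0
    nxt[(i + 1) % len(nxt)] = 1
    yield tuple(nxt)

def safe(state):
    return sum(state) == 1  # exactly one token held

def check_reachable(init):
    """Explore all reachable states; report the first unsafe one, if any."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not safe(s):
            return False, s
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, seen

ok, states = check_reachable((1, 0, 0))
assert ok and len(states) == 3
```

The exhaustive search terminates here because the state space is finite; for the parameterized systems the paper targets, that finiteness is exactly what the induction principles must supply.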
Flattening is an Improvement
, 2000
Abstract

Cited by 6 (0 self)
James Riely (DePaul University) and Jan Prins (University of North Carolina at Chapel Hill). Abstract. Flattening is a program transformation that eliminates nested parallel constructs, introducing flat parallel (vector) operations in their place. We define a sufficient syntactic condition for the correctness of flattening, providing a static approximation of Blelloch's "containment". This is achieved using a typing system that tracks the control flow of programs. Using a weak improvement preorder, we then show that the flattening transformations are intensionally correct for all well-typed programs. 1 Introduction The study of program transformations has largely been concerned with functional correctness, i.e. whether program transformations preserve program meaning. However, if we include an execution cost model as part of the programming language semantics, then we can ask whether program transformations additionally preserve or "improve" program performance. One progra...
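The flattening idea can be sketched concretely: a nested parallel map is replaced by one flat (vector) operation plus a segment descriptor recording the length of each inner sequence. The encoding below is our own illustration, not the paper's formalism:

```python
# Flattening a nested map into a single flat operation over segmented data.

def nested_map(f, xss):
    # Nested parallelism: a map inside a map.
    return [[f(x) for x in xs] for xs in xss]

def flatten(xss):
    data = [x for xs in xss for x in xs]
    segs = [len(xs) for xs in xss]          # segment descriptor
    return data, segs

def unflatten(data, segs):
    out, i = [], 0
    for n in segs:
        out.append(data[i:i + n])
        i += n
    return out

def flat_map(f, xss):
    # One flat vector operation over all elements, then re-segment.
    data, segs = flatten(xss)
    return unflatten([f(x) for x in data], segs)

xss = [[1, 2], [], [3, 4, 5]]
assert flat_map(lambda x: x * 10, xss) == nested_map(lambda x: x * 10, xss)
```

The equality of the two results is the functional-correctness half; the paper's improvement preorder additionally compares their costs.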
A Simple Supercompiler Formally Verified in Coq
 SECOND INTERNATIONAL WORKSHOP ON METACOMPUTATION IN RUSSIA (META 2010)
, 2010
Abstract

Cited by 6 (0 self)
We study an approach for verifying the correctness of a simplified supercompiler in Coq. While existing supercompilers are not very big in size, they combine many different program transformations in intricate ways, so checking the correctness of their implementation poses challenges. The presented method relies on two important technical features to achieve a compact and modular formalization: first, a very limited object language; second, decomposing the supercompilation process into many sub-transformations, whose correctness can be checked independently. In particular, we give separate correctness proofs for two key parts of driving, normalization and positive information propagation, in the context of a non-Turing-complete expression sublanguage. Though our supercompiler is currently limited, its formal correctness proof can give guidance for verifying more realistic implementations.
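Positive information propagation, one of the driving steps mentioned, can be sketched on a tiny tuple-encoded expression language: inside the branch for pattern A, the scrutinized variable is known to equal A and can be substituted away. The encoding and names are ours, not the Coq development's:

```python
# Driving `case x of A -> e1; B -> e2` propagates the matched constructor
# into each branch body, since within that branch x must equal the pattern.

def substitute(expr, var, val):
    """Replace variable `var` by `val` in a tuple-encoded expression."""
    if expr == ("var", var):
        return val
    if isinstance(expr, tuple):
        return tuple(substitute(e, var, val) for e in expr)
    return expr  # atoms (tags, names) are left unchanged

def drive_case(var, branches):
    """Positive information propagation over the branches of a case."""
    return [(pat, substitute(body, var, pat)) for pat, body in branches]

branches = [
    (("ctor", "A"), ("pair", ("var", "x"), ("var", "y"))),
    (("ctor", "B"), ("var", "x")),
]
driven = drive_case("x", branches)
assert driven[0][1] == ("pair", ("ctor", "A"), ("var", "y"))
assert driven[1][1] == ("ctor", "B")
```

Because each such sub-transformation is this small and local, its meaning preservation can be proved in isolation, which is the modularity the paper exploits.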
Lambda Calculi and Linear Speedups
 THE ESSENCE OF COMPUTATION: COMPLEXITY, ANALYSIS, TRANSFORMATION, NUMBER 2566 IN LECTURE NOTES IN COMPUTER SCIENCE
, 2002
Abstract

Cited by 6 (0 self)
The equational theories at the core of most functional programming are variations on the standard lambda calculus. The best-known of these is the call-by-value lambda calculus, whose core is the value-beta computation rule (λx.M) V → M[V/x], where V is restricted to be a value rather than an arbitrary term. This paper
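The value-beta rule (λx.M) V → M[V/x] can be demonstrated with a minimal environment-based call-by-value evaluator; the tuple term encoding below is our own illustration:

```python
# Call-by-value evaluation: the argument of an application is reduced to a
# value before the beta step. The substitution M[V/x] is realized by
# extending the closure's environment with the binding x -> V.
# Terms: ("lit", n), ("var", name), ("lam", name, body), ("app", fun, arg).

def eval_cbv(term, env=None):
    env = env or {}
    tag = term[0]
    if tag == "lit":
        return term[1]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        # A lambda is already a value; close over the current environment.
        return ("closure", term[1], term[2], env)
    if tag == "app":
        clo = eval_cbv(term[1], env)
        arg = eval_cbv(term[2], env)             # argument evaluated first: V
        _, x, body, cenv = clo
        return eval_cbv(body, {**cenv, x: arg})  # value-beta: M[V/x]
    raise ValueError(f"unknown term tag: {tag}")

# (λx. x) 42  evaluates to  42
ident = ("lam", "x", ("var", "x"))
assert eval_cbv(("app", ident, ("lit", 42))) == 42
```

Restricting the substituted term to a value is what distinguishes this rule from full beta reduction, where the argument may be any term.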