Results 11–20 of 26
Towards higher-level supercompilation
 SECOND INTERNATIONAL WORKSHOP ON METACOMPUTATION IN RUSSIA
, 2010
Abstract

Cited by 10 (7 self)
We show that the power of supercompilation can be increased by constructing a hierarchy of supercompilers, in which a lower-level supercompiler is used by a higher-level one for proving improvement lemmas. The lemmas thus obtained are used to transform expressions labeling nodes in process trees, in order to avoid premature generalizations. This kind of supercompilation, based on a combination of several metalevels, is called higher-level supercompilation (to differentiate it from higher-order supercompilation, which is related to transforming higher-order functions). Higher-level supercompilation may be considered an application of the more general principle of metasystem transition.
Deriving Analysers By Folding/unfolding of Natural Semantics and a Case Study: Slicing
, 1998
Abstract

Cited by 8 (0 self)
We consider specifications of analysers expressed as compositions of two functions: a semantic function, which returns a natural semantics derivation tree, and a property defined by recurrence on derivation trees. A recursive definition of a dynamic analyser can be obtained by fold/unfold program transformation combined with deforestation. We apply our framework to the derivation of a slicing analysis for a logic programming language. Keywords: systematic derivation, program transformation, natural semantics, proof tree, slicing analysis.
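The composition described in this abstract (a property computed by recurrence on a derivation tree that a semantic function builds, then fused by fold/unfold transformation plus deforestation) can be illustrated with a toy sketch. The names `eval_tree`, `depth`, and `analyse` are hypothetical stand-ins for illustration, not the paper's analyser:

```python
def eval_tree(n):
    """Semantic function: builds a derivation tree for computing n! naively."""
    if n == 0:
        return ("axiom", 1)
    sub = eval_tree(n - 1)
    return ("rule", n * sub[1], sub)

def depth(tree):
    """Property defined by recurrence on derivation trees."""
    return 1 if tree[0] == "axiom" else 1 + depth(tree[2])

# Fold/unfold transformation plus deforestation fuses the composition
# depth(eval_tree(n)) into one recursive definition with no intermediate tree:
def analyse(n):
    return 1 if n == 0 else 1 + analyse(n - 1)

assert depth(eval_tree(5)) == analyse(5) == 6
```

The fused `analyse` computes the same property but never materializes the derivation tree, which is the point of deforestation.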
Higher-Order Expression Procedures
 In Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM)
, 1995
Abstract

Cited by 8 (2 self)
We investigate the soundness of a specialisation technique due to Scherlis, expression procedures, in the context of a higher-order non-strict functional language. An expression procedure is a generalised procedure construct providing a contextually specialised definition. The addition of expression procedures thereby facilitates the manipulation and specialisation of programs. In the expression procedure approach, programs thus generalised are transformed by means of three key transformation rules: composition, application and abstraction. Arguably the most notable, yet most overlooked, feature of the expression procedure approach to transformation is that the transformation rules always preserve the meaning of programs. This is in contrast to the unfold/fold transformation rules of Burstall and Darlington. In Scherlis' thesis, this distinguishing property was shown to hold for a strict first-order language. Rules for call-by-name evaluation order were stated but not proved correct....
Formal Efficiency Analysis for Tree Transducer Composition
, 2004
Abstract

Cited by 6 (1 self)
We study the question of efficiency improvement or deterioration for a semantics-preserving program transformation technique for (lazy) functional languages, based on composition of restricted macro tree transducers. By annotating programs to reflect the intensional property "computation time" explicitly in the computed output, and by manipulating such annotations, we formally prove syntactic conditions under which the composed program is guaranteed to be no less efficient than the original program with respect to the number of call-by-name reduction steps required to reach normal form. Under additional conditions the guarantee also holds for call-by-need semantics. The criteria developed can be checked automatically and efficiently, and thus are suitable for integration into an optimizing compiler.
Flattening is an Improvement
, 2000
Abstract

Cited by 6 (0 self)
James Riely (DePaul University) and Jan Prins (University of North Carolina at Chapel Hill). Abstract. Flattening is a program transformation that eliminates nested parallel constructs, introducing flat parallel (vector) operations in their place. We define a sufficient syntactic condition for the correctness of flattening, providing a static approximation of Blelloch's "containment". This is achieved using a typing system that tracks the control flow of programs. Using a weak improvement preorder, we then show that the flattening transformations are intensionally correct for all well-typed programs. 1 Introduction. The study of program transformations has largely been concerned with functional correctness, i.e. whether program transformations preserve program meaning. However, if we include an execution cost model as part of the programming language semantics, then we can ask whether program transformations additionally preserve or "improve" program performance. One progra...
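As a rough illustration of what flattening does (this is a generic sketch, not Riely and Prins's formal system), a nested parallel map can be replaced by one flat map over a segmented vector, i.e. a flat data array plus a segment descriptor. All names below are assumptions for illustration:

```python
def nested_map(f, xss):
    """Nested parallel construct: a map inside a map."""
    return [[f(x) for x in xs] for xs in xss]

def flatten(xss):
    """Represent a nested list as (segment lengths, flat data)."""
    segs = [len(xs) for xs in xss]
    data = [x for xs in xss for x in xs]
    return segs, data

def unflatten(segs, data):
    """Rebuild the nesting from the segment descriptor."""
    out, i = [], 0
    for n in segs:
        out.append(data[i:i + n])
        i += n
    return out

def flat_map(f, xss):
    """Flattened version: a single flat (vector) operation over all elements."""
    segs, data = flatten(xss)
    return unflatten(segs, [f(x) for x in data])

xss = [[1, 2], [], [3, 4, 5]]
assert flat_map(lambda x: x * x, xss) == nested_map(lambda x: x * x, xss)
```

The flat version performs one bulk operation over all elements at once, which is what makes the transformation attractive for vector machines; the paper's contribution is a static condition under which this rewriting does not degrade performance.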
Supercompiler HOSC: proof of correctness
, 2010
Abstract

Cited by 6 (6 self)
The paper presents the proof of correctness of an experimental supercompiler HOSC dealing with higherorder functions.
A program specialization relation based on supercompilation and its properties
 FIRST INTERNATIONAL WORKSHOP ON METACOMPUTATION IN RUSSIA (META 2008)
, 2008
Abstract

Cited by 5 (0 self)
An input-output relation for a wide class of program specializers for a simple functional language, in the form of Natural Semantics inference rules, is presented. It covers polygenetic specialization, which includes deforestation and supercompilation, and generalizes the author's previous paper on the specification of monogenetic specialization, such as partial evaluation and restricted supercompilation. The specialization relation expresses the idea of what it is to be a specialized program, avoiding as much as possible the details of how a specializer builds it. The relation specification follows the principles of Turchin's supercompilation and captures its main notions: configuration, driving, generalization of a configuration, splitting a configuration, as well as collapsed-jungle driving. It is virtually a formal definition of supercompilation, abstracting away the most sophisticated parts of supercompilers: strategies of configuration analysis. The main properties of the program specialization relation (idempotency, transitivity, soundness, completeness, correctness) are formulated and discussed.
A Computational Formalization for Partial Evaluation (Extended Version)
, 1996
Abstract

Cited by 4 (0 self)
We formalize a partial evaluator for Eugenio Moggi's computational metalanguage. This formalization gives an evaluation-order independent view of binding-time analysis and program specialization, including a proper treatment of call unfolding, and enables us to express the essence of "control-based binding-time improvements" for let expressions. Specifically,
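The binding-time separation and call unfolding mentioned in this abstract can be sketched with the classic power example; this is a generic partial-evaluation illustration under assumed names (`power`, `specialize_power`), not Moggi's metalanguage formalization:

```python
def power(x, n):
    """General program: both arguments dynamic."""
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Specializer for a static exponent n: all recursive calls are
    unfolded at specialization time, so the residual program contains
    only the dynamic multiplications."""
    if n == 0:
        return lambda x: 1
    rest = specialize_power(n - 1)
    return lambda x: x * rest(x)

cube = specialize_power(3)   # residual program for static n = 3
assert cube(5) == power(5, 3) == 125
```

Here the exponent is static (known at specialization time) and the base is dynamic; a binding-time analysis is what decides such a classification before specialization runs.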
Lambda Calculi and Linear Speedups
 THE ESSENCE OF COMPUTATION: COMPLEXITY, ANALYSIS, TRANSFORMATION, NUMBER 2566 IN LECTURE NOTES IN COMPUTER SCIENCE
, 2002
Abstract

Cited by 3 (0 self)
The equational theories at the core of most functional programming are variations on the standard lambda calculus. The best-known of these is the call-by-value lambda calculus, whose core is the value-beta computation rule (λx.M)V → M[V/x], where V is restricted to be a value rather than an arbitrary term. This paper
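The value-beta rule can be sketched as a tiny substitution-based call-by-value evaluator: the argument is reduced to a value before being substituted for the bound variable. The tuple term encoding below is an assumption for illustration, not from the paper:

```python
def is_value(t):
    """Lambda abstractions and numerals count as values."""
    return t[0] in ("lam", "num")

def subst(t, x, v):
    """M[V/x]; adequate for the closed terms used here."""
    tag = t[0]
    if tag == "var":
        return v if t[1] == x else t
    if tag == "num":
        return t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, v))
    return ("app", subst(t[1], x, v), subst(t[2], x, v))

def eval_cbv(t):
    """Call-by-value evaluation: value-beta fires only once the
    argument has itself been reduced to a value."""
    if t[0] != "app":
        return t
    f = eval_cbv(t[1])
    a = eval_cbv(t[2])            # argument evaluated first, to a value
    assert f[0] == "lam" and is_value(a)
    return eval_cbv(subst(f[2], f[1], a))   # the value-beta step

identity = ("lam", "x", ("var", "x"))
assert eval_cbv(("app", identity, ("num", 42))) == ("num", 42)
```

Restricting the rule to values is exactly what distinguishes this calculus from full beta, where an arbitrary (possibly unevaluated) term may be substituted.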
A Simple Supercompiler Formally Verified in Coq
 SECOND INTERNATIONAL WORKSHOP ON METACOMPUTATION IN RUSSIA (META 2010)
, 2010
Abstract

Cited by 3 (0 self)
We study an approach for verifying the correctness of a simplified supercompiler in Coq. While existing supercompilers are not very big in size, they combine many different program transformations in intricate ways, so checking the correctness of their implementation poses challenges. The presented method relies on two important technical features to achieve a compact and modular formalization: first, a very limited object language; second, decomposing the supercompilation process into many sub-transformations, whose correctness can be checked independently. In particular, we give separate correctness proofs for two key parts of driving, normalization and positive information propagation, in the context of a non-Turing-complete expression sublanguage. Though our supercompiler is currently limited, its formal correctness proof can give guidance for verifying more realistic implementations.
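Positive information propagation, one of the two parts of driving named in this abstract, can be sketched in miniature: when driving the branch of a case expression, the driver records the assumption that the scrutinee equals that branch's pattern, so later tests on the same variable are decided. The toy driver below (function `drive`, tuple term encoding) is a hypothetical illustration, not the paper's Coq development:

```python
def drive(expr, env):
    """Drive expressions of the forms ('var', x), ('con', name), and
    ('case', ('var', x), [(pattern, branch), ...])."""
    tag = expr[0]
    if tag == "var":
        return env.get(expr[1], expr)        # use propagated information
    if tag == "con":
        return expr
    scrut = drive(expr[1], env)
    if scrut[0] == "con":                    # scrutinee known: pick the branch
        for pat, branch in expr[2]:
            if pat == scrut[1]:
                return drive(branch, env)
    # Scrutinee unknown: drive each branch under the positive assumption
    # that the scrutinee equals that branch's pattern.
    x = expr[1][1]
    return ("case", scrut,
            [(pat, drive(branch, {**env, x: ("con", pat)}))
             for pat, branch in expr[2]])

# case x of { True -> case x of { True -> A; False -> B }; False -> C }
inner = ("case", ("var", "x"), [("True", ("con", "A")), ("False", ("con", "B"))])
outer = ("case", ("var", "x"), [("True", inner), ("False", ("con", "C"))])
assert drive(outer, {}) == ("case", ("var", "x"),
                            [("True", ("con", "A")), ("False", ("con", "C"))])
```

The inner case on `x` disappears because inside the `True` branch the driver already knows `x = True`; this pruning of dead branches is the payoff of propagating positive information.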