Results 1–10 of 10
On Jones-optimal specialization for strongly typed languages
Semantics, Applications, and Implementation of Program Generation, LNCS 1924, 2000
Cited by 19 (2 self)
Abstract. The phrase "optimal program specialization" was defined by Jones et al. in 1993 to capture the idea of a specializer being strong enough to remove entire layers of interpretation. As it has become clear that it does not imply "optimality" in the everyday meaning of the word, we propose to rename the concept "Jones-optimality". We argue that the 1993 definition of Jones-optimality is in principle impossible to fulfil for strongly typed languages due to necessary encodings on the inputs and outputs of a well-typed self-interpreter. We propose a technical correction of the definition which allows Jones-optimality to remain a meaningful concept for typed languages. We extend recent work by Hughes and by Taha and Makholm on the long-unsolved problem of Jones-optimal specialization for strongly typed languages. The methods of Taha and Makholm are enhanced to allow "almost optimal" results when a self-interpreter is specialized to a type-incorrect program; how to do this has been an open problem since 1987. Neither Hughes' nor Taha-Makholm's methods are by themselves sufficient for Jones-optimal specialization when the language contains primitive operations that produce or consume complex data types. A simple post-process is proposed to solve the problem. An implementation of the proposed techniques has been produced and used for the first successful practical experiments with truly Jones-optimal specialization for strongly typed languages.
Type Specialisation for the lambda-calculus, or A New Paradigm for Partial Evaluation based on Type Inference
Cited by 16 (3 self)
The aim of this paper is to propagate static information via residual types. For example, when we specialise a static integer the residual expression is a dummy value, but the residual type tells us which static value it represents: 3 : int ↪ • : 3. Here `3' is a type with only one element, namely •, and which therefore has just the same elements as 4, 5, 6 etc., but which carries different static information. All expressions, even `dynamic' ones, carry static information in the form of a residual type. For purely dynamic expressions this is just an ordinary type, for example 3 : int ↪ 3 : int.
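The idea of carrying static values in residual types rather than in residual code can be sketched in a few lines of Python. This is our own illustrative model, not Hughes' actual specialiser; the names (`StaticInt`, `specialise`, `VOID`) are ours.

```python
# Sketch: a specialiser returns (residual_expr, residual_type), and fully
# static values leave only a dummy residual expression behind, with the
# known value recorded in the residual *type*.
from dataclasses import dataclass

@dataclass(frozen=True)
class StaticInt:
    """One-point residual type carrying a known value; its sole inhabitant is VOID."""
    value: int

DYNAMIC_INT = "int"   # ordinary residual type: carries no static information
VOID = "()"           # dummy residual expression for fully static values

def specialise(expr):
    """Specialise a tiny tagged-tuple source language to (residual_expr, residual_type)."""
    tag = expr[0]
    if tag == "lift_static":           # e.g. 3 : int  ↪  () : StaticInt(3)
        return (VOID, StaticInt(expr[1]))
    if tag == "dyn":                   # e.g. 3 : int  ↪  3 : int
        return (expr[1], DYNAMIC_INT)
    if tag == "add":                   # static addition happens at specialisation time
        (_, t1), (_, t2) = specialise(expr[1]), specialise(expr[2])
        if isinstance(t1, StaticInt) and isinstance(t2, StaticInt):
            return (VOID, StaticInt(t1.value + t2.value))
        raise TypeError("dynamic addition is out of scope for this sketch")
    raise ValueError(f"unknown tag: {tag}")
```

For instance, `specialise(("add", ("lift_static", 1), ("lift_static", 2)))` yields a dummy residual expression whose residual type is `StaticInt(3)`: the sum was computed entirely at specialisation time.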
Constructor specialisation for Haskell programs
2007
Cited by 16 (5 self)
User-defined data types, pattern-matching, and recursion are ubiquitous features of Haskell programs. Sometimes a function is called with arguments that are statically known to already be in constructor form, so that the work of pattern-matching is wasted. Even worse, the argument is sometimes freshly allocated, only to be immediately decomposed by the function. In this paper we describe a simple, modular transformation that specialises recursive functions according to their argument “shapes”. We show that such a transformation has a simple, modular implementation, and that it can be extremely effective in practice, eliminating both pattern-matching and heap allocation. We describe our implementation of this constructor specialisation transformation in the Glasgow Haskell Compiler, and give measurements of its effectiveness.
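The effect of specialising a recursive function by the "shape" of its argument can be illustrated with a hand-written before/after pair. This is our own Python sketch with tuples standing in for constructors, not GHC's actual SpecConstr output.

```python
# Lists as explicit constructor terms: ("Nil",) or ("Cons", head, tail).
NIL = ("Nil",)
def cons(h, t):
    return ("Cons", h, t)

# General version: every call, including the recursive one, must
# pattern-match its argument from scratch.
def last(xs):
    if xs[0] == "Nil":
        raise ValueError("empty list")
    _, h, t = xs
    return h if t[0] == "Nil" else last(t)

# Specialised variant for the shape (Cons h t): the outer match has been
# performed once, and the recursion passes the components directly, so no
# fresh Cons cell need ever be rebuilt just to be taken apart again.
def last_cons(h, t):
    return h if t[0] == "Nil" else last_cons(t[1], t[2])
```

A call site that statically builds `cons(x, rest)` only to call `last` on it can instead call `last_cons(x, rest)`, which is the saving the abstract describes.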
Inherited limits
In Partial Evaluation: Practice and Theory, 1999
Cited by 11 (0 self)
Abstract. We study the evolution of partial evaluators over the past fifteen years from a particular perspective: the attempt to prevent structural bounds in the original programs from imposing limits on the structure of residual programs. It will often be the case that a language allows unbounded numbers or sizes of particular features, but each program (being finite) will only have a finite number or size of these features. If the residual programs cannot overcome the bounds given in the original program, that can be seen as a weakness in the partial evaluator, as it potentially limits the effectiveness of residual programs. We show how historical developments in partial evaluators have removed inherited limits, and suggest how this principle can be used as a guideline for further development.
Evolution of partial evaluators: removing inherited limits
Partial Evaluation. Proceedings, LNCS 1110, 303–321, 1996
Cited by 9 (1 self)
Abstract. We show the evolution of partial evaluators over the past ten years from a particular perspective: the attempt to remove limits on the structure of residual programs that are inherited from structural bounds in the original programs. It will often be the case that a language allows an unbounded number or size of a particular feature, but each program (being finite) will only have a finite number or size of these features. If the residual programs cannot overcome the bounds given in the original program, that can be seen as a weakness in the partial evaluator, as it potentially limits the effectiveness of residual programs. The inherited limits are best observed by specializing a self-interpreter and examining the object programs this specialisation produces. We show how historical developments in partial evaluators gradually remove inherited limits, and suggest how this principle can be used as a guideline for further development.
A Simple Solution to Type Specialization (Extended Abstract)
In Larsen, Skyum, & Winskel (eds), Proceedings of the 25th International Colloquium on Automata, Languages, and Programming (ICALP), Lecture Notes in Computer Science, 1998
Cited by 4 (0 self)
BRICS Report Series RS-98-1, ISSN 0909-0878, January 1998. Copyright © 1998, BRICS, Department of Computer Science, University of Aarhus. All rights reserved.
Removing Value Encoding using Alternative Values in Partial Evaluation of Strongly-Typed Languages
In Nielson [?], LNCS, 1994
Cited by 1 (0 self)
There is a key difference between a program interpreted by an interpreter written in a strongly-typed language and a compiled version of that program. Such an interpreter usually manipulates its values through a universal domain, so a value encoding is necessary; a compiled program, by contrast, works directly on the values themselves. The encoding thus inserts a layer of interpretation for value representation. On the other hand, a way to derive a compiler automatically from an interpreter is to apply a partial evaluator to the interpreter and the interpreted program. This leads to a problem when we want this technique to remove the entire layer of interpretation, because the value encoding must then disappear, which is not the case for conventional partial evaluators. This paper proposes to introduce a new domain for partial evaluators, called alternative values, and a new specialization algorithm (based on events) which solves this problem of removing value encoding. We conclude by reporting a successful specialization...
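The universal-domain encoding that the abstract contrasts with compiled code can be sketched as follows. This is an illustrative Python model; all names (`VInt`, `interp_add`, `compiled_add`) are ours, not the paper's.

```python
# A typed interpreter must inject every value into one universal sum type
# before operating on it; residual code freed of the encoding works on
# bare values directly.
from dataclasses import dataclass
from typing import Union

@dataclass
class VInt:
    val: int

@dataclass
class VBool:
    val: bool

Value = Union[VInt, VBool]   # the universal domain of interpreted values

def interp_add(a: Value, b: Value) -> Value:
    # Interpreted addition: unwrap, operate, re-wrap -- the encoding layer.
    assert isinstance(a, VInt) and isinstance(b, VInt)
    return VInt(a.val + b.val)

def compiled_add(a: int, b: int) -> int:
    # After encoding removal: the same operation on bare values.
    return a + b
```

Removing the `VInt` wrapping and unwrapping from residual programs is exactly the encoding-elimination problem the paper's alternative values address.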
Type Specialisation of a Subset of Haskell
1997
John Hughes presents a new method for performing partial evaluation in [Hug96b]. The method is called type specialisation and works much like a type checker: it infers its results. The static content of every subexpression of the source program is derived. By letting this static content be the type of the expression, new specialised types are produced. The types are then propagated through the specialisation process independently of how code is residualised. This enables strong specialisations. This paper describes an implementation with a subset of Haskell as both the source and the residual (target) language. It is capable of handling Haskell's data types, including specialising constructors. One problem with Hughes' specialiser was that it could not handle static tuples and projections on them properly. Here, a solution is presented: a post-phase called projection unfolding. The method is capable of removing all static tuples provided neither the type of the program...
Director
2006
Program specialization is a form of automatic program generation that produces different versions of a given general source program, each of them specialized to particular known data. For example, the recursive function power, if the exponent is known to be 3, can be specialized to a more efficient (non-recursive) function λx.x · x · x, and similarly for other exponents. Type specialization [Hughes, 1996b; Hughes, 1996a; Hughes, 1998] is a form of program specialization based on type inference. Both the source program and its type are specialized to a residual program and a residual type. Principal type specialization [Martínez López and Hughes, 2002; Martínez López, 2005] is a detailed formulation of this system based on the theory of Qualified Types [Jones, 1994]. It has the property of producing principal specializations: for each specializable source expression and type, a residual expression and type can be generated such that they are more general than any other valid specialization, and all of the others can be obtained from it by a notion of instantiation. An important notion in any specialization system is that of polyvariance, a feature allowing a single source expression to be specialized to many residual results. Polyvariance can be...
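The power example above can be sketched directly. This is an illustrative Python model of specializing to a known exponent; the function names (`power`, `specialise_power`) are ours, not the thesis'.

```python
# General source program: recursive exponentiation.
def power(x, n):
    return 1 if n == 0 else x * power(x, n - 1)

def specialise_power(n):
    """Partially evaluate `power` with the exponent n known: the recursion
    over n is unfolded at specialisation time, leaving only multiplications
    in the residual function."""
    if n == 0:
        return lambda x: 1
    rest = specialise_power(n - 1)
    return lambda x: x * rest(x)

# Residual program for n = 3: behaves like  lambda x: x * x * x * 1,
# with no recursion over the exponent left at run time.
cube = specialise_power(3)
```

Calling `cube(x)` performs the three multiplications directly, mirroring the λx.x · x · x of the abstract.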