Results 1–10 of 19
Efficient analyses for realistic off-line partial evaluation
Journal of Functional Programming, 1993
Abstract

Cited by 49 (1 self)
Based on Henglein’s efficient binding-time analysis for the lambda calculus (with constants and “fix”) [Hen91], we develop four efficient analyses for use in the preprocessing phase of Similix, a self-applicable partial evaluator for a higher-order subset of Scheme. The analyses developed in this paper are almost-linear in the size of the analysed program. (1) A flow analysis determines possible value flow between lambda-abstractions and function applications and between constructor applications and selector/predicate applications. The flow analysis is not particularly biased towards partial evaluation; the analysis corresponds to the closure analysis of [Bon91b]. (2) A (monovariant) binding-time analysis distinguishes static from dynamic values; the analysis treats both higher-order functions and partially static data structures. (3) A new is-used analysis, not present in [Bon91b], finds a non-minimal binding-time annotation which is “safe” in a certain way: a first-order value may only become static if its result is “needed” during specialization; this “poor man’s generalization” [Hol88] increases termination of specialization. (4) Finally, an evaluation-order dependency analysis ensures that the order of side-effects is preserved in the residual program. The four analyses are performed ...
Flow-Directed Closure Conversion for Typed Languages
In ESOP '00, 2000
Abstract

Cited by 43 (1 self)
This paper presents a new closure conversion algorithm for simply-typed languages. We have implemented the algorithm as part of MLton, a whole-program compiler for Standard ML (SML). MLton first applies all functors and eliminates polymorphism by code duplication to produce a simply-typed program. MLton then performs closure conversion to produce a first-order, simply-typed program. In contrast to typical functional language implementations, MLton performs most optimizations on the first-order language, after closure conversion. There are two notable contributions of our work: 1. The translation uses a general flow-analysis framework which includes 0CFA. The types in the target language fully capture the results of the analysis. MLton uses the analysis to insert coercions to translate between different representations of a closure to preserve type correctness of the target language program. 2. The translation is practical. Experimental results over a range of benchmarks ...
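The core idea the abstract describes can be illustrated with a small sketch (not MLton's algorithm, and in Python rather than SML): closure conversion replaces nested functions with first-order code by pairing a code pointer with an explicit environment record, so every call site becomes a first-order application.

```python
# Illustrative sketch of closure conversion (names and encoding are ours):
# a closure becomes a pair (code, environment), and the converted code
# takes its environment as an explicit first argument.

# Higher-order source program:  fun add x = fn y => x + y
def add(x):
    return lambda y: x + y

# Closure-converted, first-order form:
def add_inner_code(env, y):
    return env["x"] + y            # free variable x is fetched from env

def add_converted(x):
    return (add_inner_code, {"x": x})   # build the closure record

def apply_closure(clos, arg):
    code, env = clos
    return code(env, arg)          # every application goes through here

assert apply_closure(add_converted(3), 4) == add(3)(4) == 7
```

In a typed setting like MLton's, the interesting part (which this sketch omits) is giving the pair a type that hides the shape of `env` while still keeping the whole program simply typed.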
Binding-Time Analysis for Mercury
16th International Conference on Logic Programming, pages 500–514, 1999
Abstract

Cited by 10 (5 self)
In this paper, we describe a binding-time analysis (BTA) for a statically typed and strongly moded pure logic programming language, in this case Mercury. Binding-time analysis is the key concept in achieving off-line program specialisation: the analysis starts from a description of the program's input available for specialisation, and propagates this information throughout the program, deriving directives for when and how to perform specialisation.
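The propagation step the abstract mentions can be sketched on a toy language (this is our illustration, far simpler than a BTA for Mercury): starting from which inputs are static, each subexpression is marked static only if everything it depends on is static, and dynamic otherwise.

```python
# Minimal monovariant binding-time propagation over arithmetic
# expressions (illustrative encoding, not the paper's analysis).
# Expressions: ("lit", n), ("var", name), ("add", e1, e2).

def bta(expr, static_vars):
    """Return the binding time 'S' or 'D' of expr, given the set of
    variables known (static) at specialisation time."""
    kind = expr[0]
    if kind == "lit":
        return "S"                       # literals are always static
    if kind == "var":
        return "S" if expr[1] in static_vars else "D"
    if kind == "add":
        bt1 = bta(expr[1], static_vars)
        bt2 = bta(expr[2], static_vars)
        return "S" if bt1 == bt2 == "S" else "D"
    raise ValueError(f"unknown expression kind: {kind}")

# x is available at specialisation time, y only at run time:
assert bta(("add", ("var", "x"), ("lit", 1)), {"x"}) == "S"
assert bta(("add", ("var", "x"), ("var", "y")), {"x"}) == "D"
```

A real BTA additionally derives *directives* from these annotations (e.g. unfold a static call, residualise a dynamic one), which is where the "when and how to perform specialisation" in the abstract comes from.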
Program Representation Size in an Intermediate Language with Intersection and Union Types
In Proceedings of the Third Workshop on Types in Compilation (TIC 2000), 2000
Abstract

Cited by 9 (7 self)
The CIL compiler for core Standard ML compiles whole programs using a novel typed intermediate language (TIL) with intersection and union types and flow labels on both terms and types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce customized data representations. However, duplication incurs compile-time space costs that are potentially much greater than are incurred in TILs employing type-level abstraction or quantification. In this paper, we present empirical data on the compile-time space costs of using CIL as an intermediate language. The data shows that these costs can be made tractable by using sufficiently fine-grained flow analyses together with standard hash-consing techniques. The data also suggests that non-duplicating formulations of intersection (and union) types would not achieve significantly better space complexity.
HM(X) Type Inference is CLP(X) Solving
Under consideration for publication in J. Functional Programming
Abstract

Cited by 9 (0 self)
The HM(X) system is a generalization of the Hindley/Milner system parameterized in the constraint domain X. Type inference is performed by generating constraints out of the program text which are then solved by the domain-specific constraint solver for X. The solver has to be invoked at the latest when type inference reaches a let node so that we can build a polymorphic type. A typical example of such an inference approach is Milner’s algorithm W. We formalize an inference approach where the HM(X) type inference problem is first mapped to a CLP(X) program. The actual type inference is achieved by executing the CLP(X) program. Such an inference approach supports the uniform construction of type inference algorithms and has important practical consequences when it comes to reporting type errors. The CLP(X) style inference system where X is defined by Constraint Handling Rules is implemented as part of the Chameleon system.
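The "generate constraints, then solve" pipeline the abstract describes can be sketched for the simplest instance, equality constraints solved by unification (our own encoding and names; this omits let-polymorphism, CLP execution, and everything that makes HM(X) general):

```python
# Sketch of constraint-generation type inference: walk the term,
# emit equality constraints, then unify. Types are "int", type
# variables ("tv", n), and arrows ("->", dom, cod).
import itertools

_fresh = itertools.count()
def newvar():
    return ("tv", next(_fresh))

def generate(expr, env, constraints):
    """Return a type for expr, appending equality constraints."""
    kind = expr[0]
    if kind == "lit":                       # integer literal
        return "int"
    if kind == "var":
        return env[expr[1]]
    if kind == "lam":                       # ("lam", x, body)
        a = newvar()
        t = generate(expr[2], {**env, expr[1]: a}, constraints)
        return ("->", a, t)
    if kind == "app":                       # ("app", f, arg)
        tf = generate(expr[1], env, constraints)
        ta = generate(expr[2], env, constraints)
        r = newvar()
        constraints.append((tf, ("->", ta, r)))
        return r
    raise ValueError(kind)

def solve(constraints):
    """Unify all constraints; return a function applying the solution."""
    subst = {}
    def resolve(t):
        while isinstance(t, tuple) and t[0] == "tv" and t in subst:
            t = subst[t]
        return t
    work = list(constraints)
    while work:
        s, t = work.pop()
        s, t = resolve(s), resolve(t)
        if s == t:
            continue
        if isinstance(s, tuple) and s[0] == "tv":
            subst[s] = t
        elif isinstance(t, tuple) and t[0] == "tv":
            subst[t] = s
        elif isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] == "->":
            work += [(s[1], t[1]), (s[2], t[2])]
        else:
            raise TypeError(f"cannot unify {s} with {t}")
    def walk(t):
        t = resolve(t)
        if isinstance(t, tuple) and t[0] == "->":
            return ("->", walk(t[1]), walk(t[2]))
        return t
    return walk

# Infer the type of (\x. x) 1:
cs = []
ty = generate(("app", ("lam", "x", ("var", "x")), ("lit",)), {}, cs)
assert solve(cs)(ty) == "int"
```

The paper's point is that this two-phase picture can be replaced by compiling the constraints into a CLP(X) program whose *execution* is the inference, which this sketch does not attempt.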
Space Issues in Compiling with Intersection and Union Types
2000
Abstract

Cited by 6 (5 self)
The CIL compiler for core Standard ML compiles whole programs using the CIL typed intermediate language with flow labels and intersection and union types. Flow labels embed flow information in the types, and intersection and union types support precise polyvariant type and flow information, without the use of type-level abstraction or quantification. Compile-time representations of CIL types and terms are potentially large compared to those for similar types and terms in systems based on quantified types. The listing-based nature of intersection and union types, together with flow label annotations on types, contribute to the size of CIL types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce multiple representation conventions, but incurs a compile-time space cost. This paper presents empirical data on the compile-time space cos...
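Hash-consing, the technique both CIL abstracts rely on to keep duplicated type terms tractable, is easy to sketch (our illustration, not CIL's implementation): structurally equal terms are built once and shared, so a repeated component costs a table lookup instead of fresh storage, and structural equality collapses to pointer equality.

```python
# Minimal hash-consing of type terms (illustrative encoding).
# A term is a tuple (operator, args); the table maps each structure
# to its unique representative node.

_table = {}

def mk(op, *args):
    """Build a type term, reusing an existing node if one exists."""
    key = (op, args)          # args must themselves be hash-consed terms
    if key not in _table:
        _table[key] = key     # the key tuple doubles as the node
    return _table[key]

int_t = mk("int")
t1 = mk("->", int_t, int_t)
t2 = mk("->", mk("int"), mk("int"))
assert t1 is t2              # structural equality becomes pointer equality
```

With sharing in place, the "potentially much greater" space cost of duplicated intersection/union components grows with the number of *distinct* terms rather than the number of occurrences.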
Breaking through the n^3 barrier: Faster object type inference
Theory and Practice of Object Systems; 4th Int'l Workshop on Foundations of Object-Oriented Languages (FOOL), 1999
Abstract

Cited by 5 (0 self)
Abadi and Cardelli [AC96] have presented and investigated object calculi that model most object-oriented features found in actual object-oriented programming languages. The calculi are innate object calculi in that they are not based on λ-calculus. They present a series of type systems for their calculi, four of which are first-order. Palsberg [Pal95] has shown how typability in each one of these systems can be decided in time O(n^3), where n is the size of an untyped object expression, using an algorithm based on dynamic transitive closure. He also shows that each of the type inference problems is hard for polynomial time under logspace reductions. In this paper we show how we can break through the (dynamic) transitive closure bottleneck and improve each one of the four type inference problems from O(n^3) to the following time complexities: without recursive types, O(n) with no subtyping and O(n^2) with subtyping; with recursive types, O(n log^2 n) with no subtyping and O(n^2) with subtyping. The key ingredient that lets us “beat” the worst-case time complexity induced by using general dynamic transitive closure or similar algorithmic methods is that object subtyping is invariant: an object type is a subtype of a “shorter” type with a subset of the field names if and only if the common fields have equal types.
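The invariant subtyping rule stated at the end of the abstract is simple enough to sketch directly (our encoding: an object type as a dict from field names to field types). Width subtyping is allowed, depth subtyping is not: the supertype may drop fields, but the fields it keeps must have *equal* types.

```python
# Sketch of invariant object subtyping (our illustration, not the
# paper's algorithm): A <: B iff every field of B is present in A
# with exactly the same type.

def is_subtype(a, b):
    return all(name in a and a[name] == b[name] for name in b)

point3d = {"x": "int", "y": "int", "z": "int"}
point2d = {"x": "int", "y": "int"}

assert is_subtype(point3d, point2d)                        # drop a field: ok
assert not is_subtype(point2d, point3d)                    # cannot add fields
assert not is_subtype({"x": "real", "y": "int"}, point2d)  # no depth subtyping
```

Invariance is exactly what makes the relation so rigid, and that rigidity is what the paper exploits to avoid the dynamic-transitive-closure bottleneck.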
Partial Evaluation for Program Analysis
1998
Abstract

Cited by 3 (1 self)
2.1 Syntax of labeled CPS terms
2.2 Sets induced by the source program
2.3 Shivers's original 0CFA
3.1 State-passing 0CFA: functionalities
3.2 State-passing 0CFA: the equations
3.3 Introducing timestamps
4.1 Types of IMP
4.2 Syntax of IMP programs
4.3 IMP predefined constants
4.4 Typing rules for IMP
4.5 Interpretation of types
4.6 Semantics of IMP language constructs
4.7 Interpretation of IMP constants
4.8 ICFA: program for computing the 0CFA
5.1 Typing rule and semantics for the caseX construct
5.2 Call unfolding
6.1 Example of residual function
6.2 The ICFA# program
6.3 ICFA## program ...
Partial evaluation for constraint-based program analyses
1999
Abstract

Cited by 2 (0 self)
We report on a case study in the application of partial evaluation, initiated by the desire to speed up a constraint-based algorithm for control-flow analysis. We designed and implemented a dedicated partial evaluator, able to specialize the analysis with respect to a given constraint graph and thus remove the interpretive overhead, and measured it with Feeley’s Scheme benchmarks. Even though the gain turned out to be rather limited, our investigation yielded valuable feedback in that it provided a better understanding of the analysis, leading us to (re)invent an incremental version. We believe this phenomenon to be a quite frequent spin-off from using partial evaluation, since the removal of interpretive overhead makes the flow of control more explicit and hence pinpoints sources of inefficiency. Finally, we observed that partial evaluation in our case yields such regular, low-level specialized programs that it begs for run-time code generation.