Results 1–10 of 12
Dynamic Typing in Polymorphic Languages
 JOURNAL OF FUNCTIONAL PROGRAMMING
, 1995
Abstract

Cited by 97 (2 self)
There are situations in programming where some dynamic typing is needed, even in the presence of advanced static type systems. We investigate the interplay of dynamic types with other advanced type constructions, discussing their integration into languages with explicit polymorphism (in the style of System F), implicit polymorphism (in the style of ML), abstract data types, and subtyping.
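The construct the abstract describes can be approximated in a small sketch. All names below are invented for illustration, not taken from the paper: a `Dynamic` value pairs a payload with a runtime type tag, and a `typecase` dispatches on that tag, mirroring how dynamics are injected into and inspected from a statically typed language.

```python
# Illustrative sketch of a "dynamic" construct: a value paired with a
# runtime type tag, consumed by a typecase that dispatches on the tag.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Dynamic:
    value: Any
    tag: str  # runtime representation of the static type

def dynamic(value: Any, tag: str) -> Dynamic:
    return Dynamic(value, tag)

def typecase(d: Dynamic,
             branches: dict[str, Callable[[Any], Any]],
             default: Callable[[], Any]) -> Any:
    # Dispatch on the runtime type tag, as a typecase expression would.
    branch = branches.get(d.tag)
    return branch(d.value) if branch else default()

d = dynamic(21, "int")
result = typecase(d, {"int": lambda n: n * 2,
                      "str": lambda s: s.upper()}, lambda: None)
# result == 42
```

In a System F-style language, `dynamic` and `typecase` would carry genuine type representations; the tag string here merely stands in for those.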
Formally optimal boxing
 In POPL '94
Abstract

Cited by 38 (0 self)
An important implementation decision in polymorphically typed functional programming languages is whether to represent data in boxed or unboxed form and when to transform them from one representation to the other. Using a language with explicit representation types and boxing/unboxing operations, we axiomatize equationally the set of all explicitly boxed versions, called completions, of a given source program. In a two-stage process we give some of the equations a rewriting interpretation that captures eliminating boxing/unboxing operations without relying on a specific implementation or even semantics of the underlying language. The resulting reduction systems operate on congruence classes of completions defined by the remaining equations E, which can be understood as moving boxing/unboxing operations along data-flow paths in the source program. We call a completion e_opt formally optimal if every other completion for the same program (and at the same representation type) reduces to e_opt under this two-stage reduction. We show that every source program has formally optimal completions, which are unique modulo E. This is accomplished by first “polarizing” the equations in E and orienting them to obtain two canonical (confluent and strongly normalizing) rewriting systems. The completions produced by Leroy’s and Poulsen’s algorithms are generally not formally optimal in our sense. The rewriting systems have been implemented and applied to some simple Standard ML programs. Our results show that the amount of boxing and unboxing operations is also in practice substantially reduced in comparison to Leroy’s completions. This analysis is intended to be integrated into Tofte’s region-based implementation of Standard ML currently underway at DIKU.
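The rewriting interpretation can be illustrated on a toy term language. This is a hedged sketch with invented constructors (`box`, `unbox`): it cancels adjacent coercion pairs, the simplest instance of the redex elimination the reduction systems perform; the paper's systems additionally move coercions along data-flow paths.

```python
# Hedged sketch: completions insert explicit box/unbox coercions; the
# rewriting interpretation cancels adjacent box/unbox pairs, mimicking
# how reduction eliminates representation changes.

def simplify(term):
    """Terms are tuples ('box', t) or ('unbox', t), or atomic strings."""
    if isinstance(term, tuple):
        op, inner = term[0], simplify(term[1])
        inner_op = inner[0] if isinstance(inner, tuple) else None
        # unbox (box t) -> t  and  box (unbox t) -> t
        if (op, inner_op) in (("unbox", "box"), ("box", "unbox")):
            return inner[1]
        return (op, inner)
    return term

t = ("unbox", ("box", ("box", ("unbox", "x"))))
out = simplify(t)
# out == "x": both coercion pairs cancel
```

Note that `box (unbox t) -> t` is only meaning-preserving when `t` already has boxed representation type, which is exactly the kind of side condition the paper's typed completions enforce.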
Eta-Expansion does The Trick
 ACM TRANSACTIONS ON PROGRAMMING LANGUAGES AND SYSTEMS
, 1996
Abstract

Cited by 21 (6 self)
Partial-evaluation folklore has it that massaging one's source programs can make them specialize better. In Jones, Gomard, and Sestoft's recent textbook, a whole chapter is dedicated to listing such "binding-time improvements": non-standard use of continuation-passing style, eta-expansion, and a popular transformation called "The Trick". We provide a unified view of these binding-time improvements, from a typing perspective. Just as a …
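A compact way to see "The Trick": when a dynamic value is known to range over a finite set, enumerating the cases exposes a static constant in each branch, so a specializer can unfold the computation at every concrete element. A hedged sketch with invented names:

```python
# Illustrative sketch of "The Trick": a dynamic value with a known
# finite domain is converted into a static one by case enumeration.

def the_trick(f, dynamic_x, finite_domain):
    # In each branch, f is applied to a *static* constant v, so a
    # partial evaluator could specialize f(v) at specialization time.
    for v in finite_domain:
        if dynamic_x == v:
            return f(v)
    raise ValueError("value outside the assumed finite domain")

square = lambda n: n * n
# the_trick(square, 3, range(5)) == 9
```

The dispatch is semantically the identity; its payoff is purely in binding times, which is why it counts as a binding-time improvement.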
Equality proofs and deferred type errors: a compiler pearl
 In ICFP
, 2012
Abstract

Cited by 6 (0 self)
The Glasgow Haskell Compiler is an optimizing compiler that expresses and manipulates first-class equality proofs in its intermediate language. We describe a simple, elegant technique that exploits these equality proofs to support deferred type errors. The technique requires us to treat equality proofs as possibly-divergent terms; we show how to do so without losing either soundness or the zero-overhead cost model that the programmer expects.
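The idea can be mimicked outside GHC. In this hedged sketch (all names invented), an unsolvable equality proof is replaced by a thunk that raises the stored error message only when the coercion is actually forced, so code paths that never touch the ill-typed value still run:

```python
# Hedged sketch: a deferred type error is a "proof" that diverges (here,
# raises) when evaluated. Programs that never force it run normally.

def deferred_error(message):
    def proof():
        raise TypeError(message)  # fires only if the coercion is used
    return proof

def cast(value, proof):
    proof()  # force the possibly-divergent equality proof
    return value

bad = deferred_error("Couldn't match type 'Int' with 'Bool'")
try:
    cast(True, bad)
    caught = ""
except TypeError as e:
    caught = str(e)
# caught reproduces the compile-time error message at runtime
```

The real technique stores the proofs in GHC's typed intermediate language; this sketch only models the operational behavior the abstract describes.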
Modelyze: a Gradually Typed Host Language for Embedding EquationBased Modeling Languages
, 2012
Pragmatic Aspects of Type-Directed Partial Evaluation
, 1996
Abstract

Cited by 5 (0 self)
Type-directed partial evaluation stems from the residualization of static values in dynamic contexts, given their type and the type of their free variables. Its algorithm coincides with the algorithm for coercing a subtype value into a supertype value, which itself coincides with Berger and Schwichtenberg's normalization algorithm for the simply typed lambda calculus. Type-directed partial evaluation thus can be used to specialize a compiled, closed program, given its type.
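The residualization algorithm is essentially normalization by evaluation: mutually recursive `reify`/`reflect` functions directed by the type. A minimal sketch (the type encoding and names are invented; residual terms are strings in lambda-calculus application syntax):

```python
# Sketch of type-directed residualization: types are the base type "o"
# or pairs (dom, cod) standing for dom -> cod. reify turns a value of a
# given type into residual code text; reflect turns code text back into
# a value that builds applications.

counter = [0]
def fresh():
    counter[0] += 1
    return f"x{counter[0]}"

def reify(ty, value):
    if ty == "o":
        return value               # base-type values are already code
    dom, cod = ty
    x = fresh()
    return f"(lambda {x}: {reify(cod, value(reflect(dom, x)))})"

def reflect(ty, code):
    if ty == "o":
        return code
    dom, cod = ty
    return lambda v: reflect(cod, f"({code} {reify(dom, v)})")

# Residualize a compiled combinator at type (o -> o) -> (o -> o):
residual = reify((("o", "o"), ("o", "o")), lambda f: lambda x: f(x))
# residual == "(lambda x1: (lambda x2: (x1 x2)))"
```

As in the abstract, the "program" being residualized is compiled code (a host-language closure); only its type guides the extraction of residual text.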
A Calculus for Boxing Analysis of Polymorphically Typed Languages
, 1996
Abstract

Cited by 1 (0 self)
An important decision when implementing languages with polymorphic types, such as Standard ML or Haskell, is whether to represent data in boxed or unboxed form and when to transform them from one representation to the other. Using a language with explicit representation types and boxing/unboxing operations, we axiomatize equationally the set of all explicitly boxed versions, called completions, of a given source program. In a two-stage process we give some of the equations a rewriting interpretation that captures eliminating boxing/unboxing operations without relying on a specific implementation or even the semantics of the underlying language. The resulting reduction systems operate on equivalence classes of completions defined by the remaining equations E, which can be understood as moving boxing/unboxing operations along data flow paths in the source program. We call a completion e_opt formally optimal if every other completion for the same program (and at the same representation ty…
unknown title
Abstract
We present a strikingly simple partial evaluator that is type-directed and reifies a compiled program into the text of a residual, specialized program. Our partial evaluator is concise (a few lines) and it handles the flagship examples of offline monovariant partial evaluation. Its source programs are constrained in two ways: they must be closed and monomorphically typable. Thus dynamic free variables need to be factored out in a “dynamic initial environment”. Type-directed partial evaluation uses no symbolic evaluation for specialization, and naturally processes static computational effects. Our partial evaluator is the part of an offline partial evaluator that residualizes static values in dynamic contexts. Its restriction to the simply typed lambda calculus coincides with Berger and Schwichtenberg's “inverse of the evaluation functional” (LICS'91), which is an instance of normalization in a logical setting. As such, type-directed partial evaluation essentially achieves lambda-calculus normalization. We extend it to produce specialized programs that are recursive and that use disjoint sums and computational effects. We also analyze its limitations: foremost, it does not handle inductive types. This paper therefore bridges partial evaluation and lambda-calculus normalization through higher-order abstract syntax, and touches upon parametricity, proof theory, and type theory (including subtyping and coercions), compiler optimization, and run-time code generation (including decompilation). It also offers a simple solution to denotational-semantics-based compilation and compiler generation.
Coinductive Axiomatization of Recursive Type Equality and Subtyping
Abstract
We present new sound and complete axiomatizations of type equality and subtype inequality for a first-order type language with regular recursive types. The rules are motivated by coinductive characterizations of type containment and type equality via simulation and bisimulation, respectively. The main novelty of the axiomatization is the fixpoint rule (or coinduction principle), which infers the conclusion A ⊢ P from the premise A, P ⊢ P, where P is either a type equality τ = τ′ or a type containment τ ≤ τ′. We define what it means for a proof (formal derivation) to be formally contractive and show that the fixpoint rule is sound if the proof of the premise A, P ⊢ P is contractive. (A proof of A, P ⊢ P using the assumption axiom is, of course, not contractive.) The fixpoint rule thus allows us to capture a coinductive relation in the fundamentally inductive framework of inference systems. The new axiomatizations are “leaner” than previous axiomatizations, particularly so for type containment, since no separate axiomatization of type equality is required, as in Amadio and Cardelli's axiomatization. They give rise to a natural operational interpretation of proofs as coercions. In particular, the fixpoint rule corresponds to definition by recursion. Finally, the axiomatization is closely related to (known) efficient algorithms for deciding type equality and type containment. These can be modified to not only decide type equality and type containment, but also construct proofs in our axiomatizations efficiently. In connection with the operational interpretation of proofs as coercions, this gives efficient (O(n²) time) algorithms for constructing efficient coercions from a type to any of its supertypes or isomorphic types.
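The fixpoint rule corresponds directly to the standard decision procedure for recursive type equality, where the set of already-visited goals plays the role of the assumptions A: assume the goal, then check its unfoldings. A hedged sketch over an invented mu-type encoding:

```python
# Types are "int", arrows ("->", a, b), mu-types ("mu", var, body), or
# bound variable names (strings). The 'seen' set implements the
# coinduction principle: a goal already assumed is accepted.

def subst(ty, var, rep):
    if ty == var:
        return rep
    if isinstance(ty, tuple):
        if ty[0] == "mu":
            if ty[1] == var:          # var re-bound; stop substituting
                return ty
            return ("mu", ty[1], subst(ty[2], var, rep))
        return ("->", subst(ty[1], var, rep), subst(ty[2], var, rep))
    return ty

def equal(s, t, seen=frozenset()):
    if (s, t) in seen:                # fixpoint rule: goal is assumed
        return True
    seen = seen | {(s, t)}
    if isinstance(s, tuple) and s[0] == "mu":
        return equal(subst(s[2], s[1], s), t, seen)   # unfold mu
    if isinstance(t, tuple) and t[0] == "mu":
        return equal(s, subst(t[2], t[1], t), seen)
    if isinstance(s, tuple) and isinstance(t, tuple):
        return equal(s[1], t[1], seen) and equal(s[2], t[2], seen)
    return s == t

# Both denote the infinite type int -> int -> int -> ...
stream1 = ("mu", "a", ("->", "int", "a"))
stream2 = ("mu", "b", ("->", "int", ("->", "int", "b")))
```

Contractiveness of the mu-bodies guarantees each unfolding makes progress, which is the algorithmic counterpart of the paper's formally contractive proofs.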