Results 1–10 of 40
Type-directed partial evaluation
 Proceedings of the Twenty-Third Annual ACM Symposium on Principles of Programming Languages
, 1996
"... Abstract. Typedirected partial evaluation stems from the residualization of arbitrary static values in dynamic contexts, given their type. Its algorithm coincides with the one for coercing asubtype value into a supertype value, which itself coincides with the one of normalization in thecalculus. T ..."
Abstract

Cited by 207 (39 self)
Abstract. Type-directed partial evaluation stems from the residualization of arbitrary static values in dynamic contexts, given their type. Its algorithm coincides with the one for coercing a subtype value into a supertype value, which itself coincides with the one of normalization in the λ-calculus. Type-directed partial evaluation is thus used to specialize compiled, closed programs, given their type. Since Similix, let insertion has been a cornerstone of partial evaluators for call-by-value procedural programs with computational effects. It prevents the duplication of residual computations, and more generally maintains the order of dynamic side effects in residual programs. This article describes the extension of type-directed partial evaluation to insert residual let expressions. This extension requires the user to annotate arrow types with effect information. It is achieved by delimiting and abstracting control, comparably to continuation-based specialization in direct style. It enables type-directed partial evaluation of effectful programs (e.g., a definitional lambda-interpreter for an imperative language) that are in direct style. The residual programs are in A-normal form.
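The residualization this abstract describes — turning a static value into residual syntax, driven only by its type — can be sketched concretely. The following is a minimal, hedged Python rendering (the term representation, the `residualize`/`reflect` names, and the `twice` example are ours, not Danvy's; let-insertion and effect annotations are omitted):

```python
import itertools

_fresh = itertools.count()

def _gensym():
    return f"x{next(_fresh)}"

def residualize(t, v):
    """Reify: turn a static value v of type t into residual syntax."""
    if t == "b":
        return v                      # base-type values are already syntax
    _, s, r = t                       # t = ("->", s, r)
    x = _gensym()
    return ["lam", x, residualize(r, v(reflect(s, ["var", x])))]

def reflect(t, e):
    """Reflect: view residual syntax e of type t as a static value."""
    if t == "b":
        return e
    _, s, r = t
    return lambda v: ["app", e, residualize(s, v)]

# Specialize the compiled, closed program `twice`, given only its type
# (b -> b) -> (b -> b): the result is a source-level normal form.
twice = lambda f: lambda x: f(f(x))
t = ("->", ("->", "b", "b"), ("->", "b", "b"))
print(residualize(t, twice))
# → ['lam', 'x0', ['lam', 'x1', ['app', ['var', 'x0'],
#                                ['app', ['var', 'x0'], ['var', 'x1']]]]]
```

Note that `twice` is an ordinary compiled Python closure: only its type, never its source text, is consulted, which is exactly the sense in which the technique specializes compiled, closed programs.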
Intuitionistic Model Constructions and Normalization Proofs
, 1998
"... We investigate semantical normalization proofs for typed combinatory logic and weak calculus. One builds a model and a function `quote' which inverts the interpretation function. A normalization function is then obtained by composing quote with the interpretation function. Our models are just like ..."
Abstract

Cited by 44 (7 self)
We investigate semantical normalization proofs for typed combinatory logic and weak λ-calculus. One builds a model and a function `quote' which inverts the interpretation function. A normalization function is then obtained by composing quote with the interpretation function. Our models are just like the intended model, except that the function space includes a syntactic component as well as a semantic one. We call this a `glued' model because of its similarity with the glueing construction in category theory. Other basic type constructors are interpreted as in the intended model. In this way we can also treat inductively defined types such as natural numbers and Brouwer ordinals. We also discuss how to formalize terms, and show how one model construction can be used to yield normalization proofs for two different typed calculi, one with explicit and one with implicit substitution. The proofs are formalized using Martin-Löf's type theory as a meta language and mechanized using the A...
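The quote/interpretation construction summarized above can be illustrated in code. This is a hedged Python sketch under our own term representation (the `glued' syntactic component of the model and the formal meta-theory are omitted); `nf` is literally quote composed with the interpretation function:

```python
import itertools

_fresh = itertools.count()

def _gensym():
    return f"v{next(_fresh)}"

# Terms: ("var", x), ("lam", x, body), ("app", f, a); types: "b" or ("->", s, t).

def evaluate(term, env):
    """The interpretation function: map syntax into the semantic model."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        _, x, body = term
        return lambda v: evaluate(body, {**env, x: v})
    _, f, a = term
    return evaluate(f, env)(evaluate(a, env))

def quote(t, v):
    """Invert the interpretation: read a semantic value back as a normal form."""
    if t == "b":
        return v
    _, s, r = t
    x = _gensym()
    return ("lam", x, quote(r, v(unquote(s, ("var", x)))))

def unquote(t, e):
    """Embed neutral syntax of type t back into the model."""
    if t == "b":
        return e
    _, s, r = t
    return lambda v: ("app", e, quote(s, v))

def nf(t, term):
    """Normalization = quote composed with the interpretation function."""
    return quote(t, evaluate(term, {}))

# The redex \f. (\g. g) f normalizes to the eta-long identity \v0. \v1. v0 v1.
redex = ("lam", "f", ("app", ("lam", "g", ("var", "g")), ("var", "f")))
print(nf(("->", ("->", "b", "b"), ("->", "b", "b")), redex))
# → ('lam', 'v0', ('lam', 'v1', ('app', ('var', 'v0'), ('var', 'v1'))))
```

The sketch is reduction-free in the sense the surrounding abstracts use: no rewrite rules are applied; normal forms fall out of evaluating into the model and reading back.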
Inductive families need not store their indices
 Types for Proofs and Programs, Torino, 2003, volume 3085 of LNCS
, 2004
"... Abstract. We consider the problem of efficient representation of dependently typed data. In particular, we consider a language TT based on Dybjer's notion of inductive families [11] and reanalyse their general form with a view to optimising the storage associated with their use. We introduce an exec ..."
Abstract

Cited by 37 (13 self)
Abstract. We consider the problem of efficient representation of dependently typed data. In particular, we consider a language TT based on Dybjer's notion of inductive families [11] and reanalyse their general form with a view to optimising the storage associated with their use. We introduce an execution language, ExTT, which allows the commenting out of computationally irrelevant subterms, and show how to use properties of elimination rules to elide constructor arguments and tags in ExTT. We further show how some types can be collapsed entirely at runtime. Several examples are given, including a representation of the simply typed λ-calculus for which our analysis yields an 80% reduction in runtime storage requirements.

1 Introduction
Dependent type theory provides programmers with more than an integrated logic for reasoning about program correctness. It allows more precise types for programs and data in the first place, strengthening the typechecker's language of guarantees. We have richer function types ∀x:S. T which adapt their return types to each argument; we also have richer data structures which do not just contain but explain data, exposing and enforcing their properties. Moreover, we may reasonably expect more static detail about programs and data to yield better optimised dynamic behaviour. We need neither test what is guaranteed nor store what is determined by typechecking. Pollack's implicit syntax [24] already supports the omission of much redundant information from concrete syntax for similar reasons.
An Idealized MetaML: Simpler, and More Expressive
, 1999
"... MetaML is a multistage functional programming language featuring three constructs that can be viewed as staticallytyped refinements of the backquote, comma, and eval of Scheme. Thus it provides special support for writing code generators and serves as a semantically sound basis for systems involv ..."
Abstract

Cited by 32 (13 self)
MetaML is a multi-stage functional programming language featuring three constructs that can be viewed as statically typed refinements of the backquote, comma, and eval of Scheme. Thus it provides special support for writing code generators and serves as a semantically sound basis for systems involving multiple interdependent computational stages. In previous work, we reported on an implementation of MetaML, and on a small-step semantics and type system for MetaML. In this paper, we present An Idealized MetaML (AIM), the result of our study of a categorical model for MetaML. An important outstanding problem is finding a type system that provides the user with a means for manipulating both open and closed code. This problem has eluded efforts by us and other researchers for over three years. AIM solves the issue by providing two type constructors, one classifying closed code and the other open code, and by describing how they interact.
Constructions, Inductive Types and Strong Normalization
, 1993
"... This thesis contains an investigation of Coquand's Calculus of Constructions, a basic impredicative Type Theory. We review syntactic properties of the calculus, in particular decidability of equality and typechecking, based on the equalityasjudgement presentation. We present a settheoretic notio ..."
Abstract

Cited by 31 (2 self)
This thesis contains an investigation of Coquand's Calculus of Constructions, a basic impredicative Type Theory. We review syntactic properties of the calculus, in particular decidability of equality and typechecking, based on the equality-as-judgement presentation. We present a set-theoretic notion of model, CC-structures, and use this to give a new strong normalization proof based on a modification of the realizability interpretation. An extension of the core calculus by inductive types is investigated and we show, using the example of infinite trees, how the realizability semantics and the strong normalization argument can be extended to non-algebraic inductive types. We emphasize that our interpretation is sound for large eliminations, e.g. allows the definition of sets by recursion. Finally we apply the extended calculus to a nontrivial problem: the formalization of the strong normalization argument for Girard's System F. This formal proof has been developed and checked using the...
From semantics to rules: A machine assisted analysis
 Proceedings of CSL '93, LNCS 832
, 1999
"... this paper is similar to the one in [2]. In this paper they define a normalization function for simply typed ..."
Abstract

Cited by 29 (0 self)
this paper is similar to the one in [2]. In this paper they define a normalization function for simply typed
A Semantic Account of Type-Directed Partial Evaluation
 In Gopalan Nadathur, editor, International Conference on Principles and Practice of Declarative Programming, number 1702 in Lecture
, 1999
"... We formally characterize partial evaluation of functional programs as a normalization problem in an equational theory, and derive a typebased normalizationbyevaluation algorithm for computing normal forms in this setting. We then establish the correctness of this algorithm using a semantic ar ..."
Abstract

Cited by 26 (2 self)
We formally characterize partial evaluation of functional programs as a normalization problem in an equational theory, and derive a type-based normalization-by-evaluation algorithm for computing normal forms in this setting. We then establish the correctness of this algorithm using a semantic argument based on Kripke logical relations. For simplicity, the results are stated for a non-strict, purely functional language; but the methods are directly applicable to stating and proving correctness of type-directed partial evaluation in ML-like languages as well.
Categorical Reconstruction of a Reduction Free Normalization Proof
, 1995
"... Introduction We present a categorical proof of the normalization theorem for simply typed calculus, i.e. we derive a computable function nf which assigns to every typed term a normal form, s.t. M ' N nf(M ) = nf(N ) nf(M ) ' M where ' is fij equality. Both the function nf and its correctness ..."
Abstract

Cited by 23 (5 self)
Introduction. We present a categorical proof of the normalization theorem for the simply typed λ-calculus, i.e. we derive a computable function nf which assigns to every typed term a normal form, such that M ≃ N implies nf(M) = nf(N), and nf(M) ≃ M, where ≃ is βη-equality. Both the function nf and its correctness properties can be deduced from the categorical construction. To substantiate this, we present an ML program in the appendix which can be extracted from our argument. We emphasize that this presentation of normalization is reduction-free, i.e. we do not mention term rewriting or use properties of term rewriting systems such as the Church-Rosser property. An immediate consequence of normalization is the decidability of ≃, but there are other useful corollaries; for instance we can show that
Normalization and the Yoneda Embedding
"... this paper we describe a new, categorical approach to normalization in typed  ..."
Abstract

Cited by 22 (3 self)
this paper we describe a new, categorical approach to normalization in typed ...
Eta-Expansion does the Trick
 ACM TRANSACTIONS ON PROGRAMMING LANGUAGES AND SYSTEMS
, 1996
"... Partialevaluation folklore has it that massaging one's source programs can make them specialize better. In Jones, Gomard, and Sestoft's recent textbook, a whole chapter is dedicated to listing such "bindingtime improvements": nonstandard use of continuationpassing style, etaexpansion, and a popul ..."
Abstract

Cited by 21 (6 self)
Partial-evaluation folklore has it that massaging one's source programs can make them specialize better. In Jones, Gomard, and Sestoft's recent textbook, a whole chapter is dedicated to listing such "binding-time improvements": non-standard use of continuation-passing style, eta-expansion, and a popular transformation called "The Trick". We provide a unified view of these binding-time improvements, from a typing perspective. Just as a