Results

1 - 7 of 7

### FROM SELF-INTERPRETERS TO NORMALIZATION BY EVALUATION, Mathieu Boespflug


Abstract

Abstract. We characterize normalization by evaluation as the composition of a self-interpreter with a self-reducer using a special representation scheme, in the sense of Mogensen (1992). We do so by deriving in a systematic way an untyped normalization by evaluation algorithm from a standard interpreter for the λ-calculus. The derived algorithm is not novel, and indeed other published algorithms may be obtained in the same manner through appropriate adaptations to the representation scheme.

1. Self-interpreters and self-reducers

What is a self-interpreter? For the untyped λ-calculus, Mogensen (1992) offers the following definition, given an injective mapping ⌜·⌝ (the representation scheme) that yields representations of arbitrary terms:

E ⌜M⌝ =β M.

That is, a self-interpreter is a term E of the λ-calculus such that when applied to the representation ⌜M⌝ of any term M, the result is a convertible term (modulo renaming). The ⌜·⌝ mapping cannot of course be defined within the λ-calculus itself, but we posit its existence as a primitive operation of the calculus. The representation of a term is a piece of data, something that can be manipulated, transformed and inspected within the calculus itself. It is natural to represent data as terms in normal form, so that data may be regarded as constant with respect to term reduction. Consider the following grammar for terms and normal terms:

Var ∋ x, y, z
Term ∋ t ::= x | λx.t | t t
TermNF ∋ tn ::= ta | λx.tn
TermA ∋ ta ::= x | ta tn

The representation scheme can be typed as ⌜·⌝ : Term → TermNF. All manner of representation schemes are possible, but Mogensen commits to a particularly simple one, which enables him to implement a trivially simple self-interpreter that not only yields convertible terms from their representations, but whose weak head normal form, when applied to the representation of a normal term M, is in fact identical to M, up to renaming of variables. Let us call this particular self-interpreter Eα. We have that Eα ⌜M⌝ →whnf M. Mogensen goes on to define a self-reducer as a transformation on representations: R ⌜M⌝ =β ⌜NF(M)⌝, where NF(M) stands for the normal form of M, if one exists. Equipped with such a contraption, we can define a special kind of self-interpreter with the additional property that all representations of terms evaluate to normal forms.
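To make the definitions above concrete, here is a hypothetical Python transcription of Mogensen's representation scheme, using host closures in place of λ-abstraction. The names `var`, `lam`, `app`, and `E` are illustrative choices, not the paper's; the point is only that `E(⌜M⌝)` behaves like M.

```python
# Mogensen-style representation of lambda terms, sketched with Python closures.

def var(x):
    # <x> = \a b c. a x
    return lambda a, b, c: a(x)

def lam(f):
    # <\x.M> = \a b c. b (\x. <M>)  -- f maps a value for x to the body's representation
    return lambda a, b, c: b(f)

def app(m, n):
    # <M N> = \a b c. c <M> <N>
    return lambda a, b, c: c(m, n)

def E(t):
    """Self-interpreter: E applied to <M> yields (a host analogue of) M."""
    return t(lambda x: x,                  # variables interpret to their bound value
             lambda f: lambda x: E(f(x)),  # abstractions become host functions
             lambda m, n: E(m)(E(n)))      # applications become host applications

# Example: the Church numeral 2, represented and then interpreted.
two = lam(lambda f: lam(lambda x: app(var(f), app(var(f), var(x)))))
```

Here `E(two)` is an ordinary Python function: `E(two)(lambda n: n + 1)(0)` computes 2, and since Church self-application computes exponentiation, `E(app(two, two))` behaves as the numeral 4.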

### Conversion by Evaluation, Mathieu Boespflug


Abstract

Abstract. We show how testing convertibility of two types in dependently typed systems can advantageously be implemented via untyped normalization by evaluation, thereby reusing existing compilers and runtime environments for stock functional languages, without peeking under the hood, for a fast yet cheap system in terms of implementation effort. Our focus is on the performance of untyped normalization by evaluation. We demonstrate that with the aid of a standard optimization for higher-order programs (namely uncurrying) and the reuse of the native datatypes and pattern-matching facilities of the underlying evaluator, we may obtain a normalizer with little to no performance overhead compared to a regular evaluator.
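One way to read "reusing existing compilers and runtime environments" is to translate object-language terms into host-language source and let the host evaluator perform all β-reduction natively. A minimal sketch in Python (the tuple term format and the `compile_term` helper are assumptions for illustration, not the paper's setup):

```python
# Compile untyped lambda terms into Python source, then reuse Python's own
# evaluator. Terms: ("var", x) | ("lam", x, body) | ("app", fun, arg).
# Assumes variable names are valid Python identifiers.

def compile_term(term):
    tag = term[0]
    if tag == "var":
        return term[1]
    if tag == "lam":
        return "(lambda %s: %s)" % (term[1], compile_term(term[2]))
    return "(%s)(%s)" % (compile_term(term[1]), compile_term(term[2]))

# The Church numeral 2 becomes an ordinary Python closure; application,
# closure allocation and variable lookup are all the host runtime's own.
two = ("lam", "f", ("lam", "x",
       ("app", ("var", "f"), ("app", ("var", "f"), ("var", "x")))))
two_fn = eval(compile_term(two))
```

Here `compile_term(two)` produces `"(lambda f: (lambda x: (f)((f)(x))))"`, and `two_fn(lambda n: n + 1)(0)` evaluates to 2 using only native calls.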

### Efficient Normalization by Evaluation, Mathieu Boespflug


Abstract

Dependently typed theorem provers allow arbitrary terms in types. It is convenient to identify large classes of terms during type checking, hence many such systems provide some form of conversion rule. A standard algorithm for testing the convertibility of two types consists in normalizing them, then testing for syntactic equality of the normal forms. Normalization by evaluation is a standard technique enabling the use of existing compilers and runtimes for functional languages to implement normalizers, without peeking under the hood, for a fast yet cheap system in terms of implementation effort. Our focus is on the performance of untyped normalization by evaluation. We demonstrate that with the aid of a standard optimization for higher-order programs (namely uncurrying) and the reuse of the pattern-matching facilities of the evaluator for datatypes, we may obtain a normalizer that evaluates non-functional values about as fast as the underlying evaluator, but as an added benefit can also fully normalize functional values; to put it another way, it partially evaluates functions efficiently.
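The conversion algorithm described above, normalize then compare, can be sketched as a small untyped normalization-by-evaluation procedure. The sketch below is an illustration of the general technique in Python, not the paper's implementation: terms evaluate to semantic values (host closures, or neutral terms when reduction is stuck on a variable), and a read-back pass reifies values into de Bruijn-indexed normal forms so plain equality is α-equivalence.

```python
# Untyped normalization by evaluation, minimal sketch.
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

def evaluate(term, env):
    """Evaluate a term to a semantic value, reusing Python closures."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        name, body = term[1], term[2]
        return ("fun", lambda v: evaluate(body, {**env, name: v}))
    fun = evaluate(term[1], env)
    arg = evaluate(term[2], env)
    if fun[0] == "fun":
        return fun[1](arg)         # native application
    return ("napp", fun, arg)      # neutral application (stuck on a variable)

def reify(value, depth):
    """Read a semantic value back into a de Bruijn-indexed normal form."""
    tag = value[0]
    if tag == "fun":
        # Apply to a fresh variable (a de Bruijn level) to go under the binder.
        return ("lam", reify(value[1](("nvar", depth)), depth + 1))
    if tag == "nvar":
        return ("var", depth - 1 - value[1])   # convert level to index
    return ("app", reify(value[1], depth), reify(value[2], depth))

def normalize(term):
    return reify(evaluate(term, {}), 0)

def convertible(t1, t2):
    # De Bruijn indices make plain equality alpha-equivalence.
    return normalize(t1) == normalize(t2)

# Church numerals: plus two two normalizes to the same term as four.
two  = ("lam", "f", ("lam", "x",
        ("app", ("var", "f"), ("app", ("var", "f"), ("var", "x")))))
four = ("lam", "f", ("lam", "x",
        ("app", ("var", "f"), ("app", ("var", "f"),
        ("app", ("var", "f"), ("app", ("var", "f"), ("var", "x")))))))
plus = ("lam", "m", ("lam", "n", ("lam", "f", ("lam", "x",
        ("app", ("app", ("var", "m"), ("var", "f")),
         ("app", ("app", ("var", "n"), ("var", "f")), ("var", "x")))))))
```

Note that `normalize` fully normalizes under binders, which is exactly what distinguishes it from the underlying evaluator, which stops at closures.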

### Conversion by Evaluation, Mathieu Boespflug (author manuscript, published in the Twelfth International Symposium on Practical Aspects of Declarative Languages, 2010)

, 2009



### FROM SELF-INTERPRETERS TO NORMALIZATION BY EVALUATION

, 2009
