Results 1–4 of 4
On the Safety of Nöcker’s Strictness Analysis
Frankfurt am Main, Germany
Abstract

Cited by 8 (7 self)
This paper proves the correctness of Nöcker’s method of strictness analysis, implemented for Clean, which is an effective way of performing strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses the correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker’s strictness analysis. We reformulate Nöcker’s strictness analysis algorithm in a higher-order lambda calculus with case, constructors, letrec, and a nondeterministic choice operator ⊕ used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams.
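To fix intuitions for the abstract above, here is a minimal sketch of strictness analysis over the two-point abstract domain {BOT, TOP}, where BOT abstracts definitely-undefined values and TOP any value. It illustrates only the general idea (a function is strict in an argument when feeding BOT there forces BOT out); it is not Nöcker’s abstract-reduction algorithm, and all names below are illustrative.

```python
BOT, TOP = 0, 1  # abstract values: undefined / possibly defined

def lub(a, b):
    """Least upper bound, the abstract counterpart of a union/choice operator."""
    return max(a, b)

# Abstract versions of a few primitives: a strict primitive maps BOT to BOT.
def abs_plus(x, y):          # (+) is strict in both arguments
    return BOT if BOT in (x, y) else TOP

def abs_if(c, t, e):         # if-then-else is strict only in the condition
    return BOT if c == BOT else lub(t, e)

def is_strict_in(f, arity, i):
    """f is strict in argument i if the abstract result for BOT there is BOT."""
    args = [TOP] * arity
    args[i] = BOT
    return f(*args) == BOT

# Example: g x y = if x then y + 1 else 0  -- strict in x, not in y.
def abs_g(x, y):
    return abs_if(x, abs_plus(y, TOP), TOP)

print(is_strict_in(abs_g, 2, 0))  # True: g always forces x
print(is_strict_in(abs_g, 2, 1))  # False: the else-branch ignores y
```

A compiler can use such a verdict to evaluate strict arguments eagerly without changing the observable behavior of a lazy program, which is why the safety proof matters.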
Correctness of copy in calculi with letrec, case and constructors
, 2007
Abstract

Cited by 3 (0 self)
Call-by-need lambda calculi with letrec provide a rewriting-based operational semantics for (lazy) call-by-name functional languages. These calculi model the sharing behavior during evaluation more closely than let-based calculi that use a fixpoint combinator. In a previous paper we showed that the copy transformation is correct for the small calculus LRλ. In this paper we demonstrate that the proof method, based on a calculus on infinite trees, for showing correctness of instantiation operations can be extended to the calculus LRCCλ with case and constructors, and show that copying at compile time can be done without restrictions. We also show that the call-by-need and call-by-name strategies are equivalent w.r.t. contextual equivalence. A consequence is the correctness of transformations such as instantiation, inlining, specialization and common subexpression elimination in LRCCλ. We are confident that the method scales up for proving correctness of copy-related transformations in nondeterministic lambda calculi if restricted to “deterministic” subterms.
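The sharing behavior this abstract refers to can be sketched in a few lines (in Python, standing in for a lazy language): call-by-name re-evaluates a bound expression at every use site, while call-by-need evaluates it once via a shared, memoized thunk. The copy transformation discussed above is safe precisely when duplicating such shared work does not change the observable result. The `Thunk` class and counters below are illustrative, not part of the papers’ formal calculi.

```python
class Thunk:
    """Call-by-need: a suspended computation, forced at most once."""
    def __init__(self, compute):
        self.compute, self.value, self.forced = compute, None, False
    def force(self):
        if not self.forced:
            self.value, self.forced = self.compute(), True
        return self.value

evaluations = 0

def expensive():
    global evaluations
    evaluations += 1
    return 21

# Call-by-name: the bound expression is copied to each use site and re-run.
by_name = expensive() + expensive()
count_by_name = evaluations

# Call-by-need: a single shared thunk, forced once, result reused.
evaluations = 0
t = Thunk(expensive)
by_need = t.force() + t.force()
count_by_need = evaluations

print(by_name, count_by_name)   # 42 2
print(by_need, count_by_need)   # 42 1
```

Both strategies compute the same value, which is the observational equivalence result; they differ only in how much work is duplicated.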
Böhm's Theorem for Berarducci Trees
, 2000
Abstract

Cited by 1 (0 self)
We propose an extension of the lambda calculus which internally discriminates two lambda terms if and only if they have different Berarducci trees.

1 Introduction
The Lambda Calculus is a theory of functions that serves as a foundation for the functional programming paradigm. Lambda terms in this view are idealized programs. There are essentially two ways of characterizing the meaning of lambda terms. The first one is to run the program and to study the output. The second one is to observe the effect of the program when used as a subprogram in other programs. With respect to the first approach, traditionally the output of a lambda term was described by its Böhm tree. But Lévy-Longo trees and, more recently, Berarducci trees have also been used. In this paper we will focus on Berarducci trees. These trees provide the possible output of a lambda term in greatest detail. The idea behind all these different concepts of tree is stable information that we can recover by reducing the terms. This is ...
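The “run the program and study the output” view can be made concrete by unfolding a term into a finite prefix of its Böhm tree, the coarsest of the three notions mentioned above (Berarducci trees refine it by also recording the structure of head-active terms). The term representation, the fuel-based cutoff, and the assumption of globally unique binder names below are illustrative simplifications, not part of the paper’s formal development.

```python
def subst(t, name, s):
    """Capture-avoiding substitution, assuming globally unique binder names."""
    kind = t[0]
    if kind == 'var':
        return s if t[1] == name else t
    if kind == 'lam':
        return ('lam', t[1], subst(t[2], name, s))
    return ('app', subst(t[1], name, s), subst(t[2], name, s))

def head_step(t):
    """One leftmost (head) beta step, or None if t is in head normal form."""
    if t[0] == 'lam':
        inner = head_step(t[2])
        return None if inner is None else ('lam', t[1], inner)
    spine, args = t, []
    while spine[0] == 'app':          # walk down the application spine
        args.append(spine[2]); spine = spine[1]
    if spine[0] == 'lam' and args:
        redex_arg = args.pop()        # innermost App holds the head redex's argument
        reduced = subst(spine[2], spine[1], redex_arg)
        for a in reversed(args):
            reduced = ('app', reduced, a)
        return reduced
    return None

def bohm_prefix(t, fuel=50, depth=3):
    """Finite Böhm-tree prefix: 'BOT' if no head normal form is found within fuel."""
    for _ in range(fuel):
        nxt = head_step(t)
        if nxt is None:
            break
        t = nxt
    else:
        return 'BOT'                  # head reduction did not terminate in time
    lams = []
    while t[0] == 'lam':              # leading binders of the head normal form
        lams.append(t[1]); t = t[2]
    spine, args = t, []
    while spine[0] == 'app':
        args.append(spine[2]); spine = spine[1]
    kids = ['...'] if depth == 0 else [bohm_prefix(a, fuel, depth - 1)
                                       for a in reversed(args)]
    return (tuple(lams), spine[1], kids)  # (binders, head variable, subtrees)

I = ('lam', 'x', ('var', 'x'))
omega = ('app', ('lam', 'w', ('app', ('var', 'w'), ('var', 'w'))),
                ('lam', 'w', ('app', ('var', 'w'), ('var', 'w'))))
print(bohm_prefix(('app', I, ('var', 'y'))))  # ((), 'y', []): head variable y
print(bohm_prefix(omega))                     # 'BOT': no head normal form
```

A Berarducci tree would not collapse the second example to a single bottom node but would keep unfolding the head-active structure, which is exactly the extra discriminating power the paper studies.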
Intersection Types, λ-models, and Böhm Trees
Abstract
This paper is an introduction to intersection type disciplines, with the aim of illustrating their theoretical relevance in the foundations of the λ-calculus. We start by describing the well-known results showing the deep connection between intersection type systems and normalization properties, i.e., their power of naturally characterizing solvable, normalizing, and strongly normalizing pure λ-terms. We then explain the importance of intersection types for the semantics of the λ-calculus, through the construction of filter models and the representation of algebraic lattices. We end with an original result that shows how intersection types also make it possible to naturally characterize tree representations of unfoldings of λ-terms (Böhm trees).
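A one-line taste of the expressive power described above: intersection types can type self-application, which simple types cannot. In a standard intersection type system (notation in the style of Coppo–Dezani; the particular type variables σ, τ are illustrative), the two uses of the bound variable receive the two components of an intersection:

```latex
\[
x : (\sigma \to \tau) \cap \sigma \;\vdash\; x : \sigma \to \tau
\qquad
x : (\sigma \to \tau) \cap \sigma \;\vdash\; x : \sigma
\]
\[
\frac{x : (\sigma \to \tau) \cap \sigma \;\vdash\; x\,x : \tau}
     {\vdash \lambda x.\,x\,x : \bigl((\sigma \to \tau) \cap \sigma\bigr) \to \tau}
\]
```

Typability results of this shape are what underpin the normalization characterizations the paper surveys: for example, a term is solvable exactly when it receives a non-trivial type in such a system.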