Results 11–20 of 61
Safety Analysis versus Type Inference
In INFORMATION AND COMPUTATION, 1995
Cited by 36 (6 self)

Abstract
Safety analysis is an algorithm for determining if a term in an untyped lambda calculus with constants is safe, i.e., if it does not cause an error during evaluation. This ambition is also shared by algorithms for type inference. Safety analysis and type inference are based on rather different perspectives, however. Safety analysis is global in that it can only analyze a complete program. In contrast, type inference is local in that it can analyze pieces of a program in isolation. In this paper we prove that safety analysis is sound , relative to both a strict and a lazy operational semantics. We also prove that safety analysis accepts strictly more safe lambda terms than does type inference for simple types. The latter result demonstrates that global program analyses can be more precise than local ones.
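The gap between the global and the local analysis can be seen in a classic term: self-application applied to the identity runs without error, so a whole-program safety analysis may accept it, yet simple type inference rejects it because `x x` forces the unsolvable constraint t = t → s. A minimal sketch in Python (the names `apply`, `self_app`, `identity` are illustrative, not from the paper):

```python
# (λx. x x)(λy. y) is safe: it evaluates to λy. y without any runtime error.
# Simple type inference rejects it, since x would need type t and t -> s at once.

def apply(f, a):
    return f(a)

self_app = lambda x: x(x)   # λx. x x — not simply typable
identity = lambda y: y      # λy. y

result = apply(self_app, identity)  # evaluates safely to the identity
assert result(42) == 42
```

This is exactly a term on which a global, flow-based safety analysis can be strictly more precise than local simple-type inference.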
Dynamic Typing
In Proc. Fourth European Symp. Programming (ESOP'92), 1992
Cited by 36 (3 self)

Abstract
We present an extension of a statically typed language with a special type Dynamic and explicit type tagging and checking operations (coercions). Programs in runtime-typed languages are viewed as incomplete programs that are to be completed to well-typed programs by explicitly inserting coercions into them. Such completions are generally not unique. If the meaning of an incomplete program is to be the meaning of any of its completions, and if it is to be unambiguous, it is necessary that all its completions are coherent (semantically equivalent). We characterize with an equational theory the properties a semantics must satisfy to be coherent. Since "naive" coercion evaluation does not satisfy all of the coherence equations, we exclude from consideration certain "unsafe" completions that can cause avoidable type errors at runtime. Various classes of completions may be used, parameterized by whether or not coercions may only occur at data creation and data use points in a program, and by whether only primitive coercions or also induced coercions are admitted. For each of these classes, any term has a minimal completion that is optimal in the sense that it contains no coercions that could be avoided by another coercion in the same class. In particular, minimal completions contain no coercions at all whenever the program is statically typable. If only primitive type operations are admitted, we show that minimal completions can be computed in almost-linear time. If induced coercions are also allowed, the minimal completion can be computed in time O(nm), where n is the size of the program and m is the size of the value flow graph of the program, which may be of size O(n²) but is typically rather sparse. Finally, we sketch how this explicit dynamic typing discipline can be extended to let-polymorphism by parameterization with respect to coercions.
The resulting language framework leads to a seamless integration of statically typed and dynamically typed languages by relying on type inference for programs that have no type information and no explicit coercions whatsoever.
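The tag/check coercions described above can be sketched in a few lines of Python. This is an illustration of the idea, not the paper's formal system; `Dynamic`, `tag_int`, and `check_int` are hypothetical names:

```python
# A Dynamic value pairs a runtime type tag with the underlying value.
class Dynamic:
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def tag_int(n):       # primitive coercion: int -> Dynamic (tagging)
    return Dynamic("int", n)

def check_int(d):     # primitive coercion: Dynamic -> int (checking)
    if d.tag != "int":
        raise TypeError(f"expected int, got {d.tag}")
    return d.value

# Completing an incomplete program: the untyped expression  x + 1
# becomes  check_int(x) + 1  once a coercion is explicitly inserted.
x = tag_int(41)
assert check_int(x) + 1 == 42
```

A completion is "unsafe" in the paper's sense when an inserted check can fail at runtime even though some other completion of the same program would not; minimal completions avoid exactly such avoidable checks.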
Set Constraints and Set-Based Analysis
In Proceedings of the Workshop on Principles and Practice of Constraint Programming, LNCS 874, 1994
Cited by 35 (0 self)

Abstract
This paper contains two main parts. The first examines the set constraint calculus, discusses its history, and overviews the current state of known algorithms and related issues. Here we will also survey the uses of set constraints, starting from early work in (imperative) program analysis, to more recent work in logic and functional programming systems. The second part describes set-based analysis. The aim here is a declarative interpretation of what it means to approximate the meaning of a program in just one way: ignore dependencies between variables, and instead, reason about each variable as the set of its possible runtime values. The basic approach starts with some description of the operational semantics, and then systematically replaces descriptions of environments (mappings from program variables to values) by set environments (mappings from program variables to sets ...
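The replacement of environments by set environments can be illustrated concretely. The following sketch (a hypothetical two-run program, not from the paper) collapses per-run environments pointwise into one set environment, which is exactly where the approximation loses inter-variable dependencies:

```python
# Concrete runs bind (x, y) to (1, 10) or (2, 20) — never (1, 20).
runs = [{"x": 1, "y": 10}, {"x": 2, "y": 20}]

# The set environment maps each variable to the SET of its possible values,
# merged across all runs, ignoring dependencies between variables.
set_env = {}
for env in runs:
    for var, val in env.items():
        set_env.setdefault(var, set()).add(val)

assert set_env == {"x": {1, 2}, "y": {10, 20}}
# Precision loss: the set environment also admits the spurious pair (1, 20).
```

This is the declarative core of set-based analysis: each variable is approximated by the set of its possible runtime values, independently of the others.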
Safety Analysis versus Type Inference for Partial Types
In Information Processing Letters, 1992
Cited by 30 (11 self)

Abstract
Safety analysis is an algorithm for determining if a term in an untyped lambda calculus with constants is safe, i.e., if it does not cause an error during evaluation. We prove that safety analysis accepts strictly more safe lambda terms than does type inference for Thatte's partial types.

1 Introduction

We will compare two techniques for analyzing the safety of terms in an untyped lambda calculus with constants; see Figure 1. The safety we are concerned with is the absence of those runtime errors that arise from the misuse of constants. In this paper we consider just the two constants 0 and succ. They can be misused either by applying a number to an argument, or by applying succ to an abstraction. Safety is undecidable, so any analysis algorithm must reject some safe programs.

E ::= x | λx.E | E1 E2 | 0 | succ E

Figure 1: The lambda calculus.

One way of achieving a safety guarantee is to perform type inference (TI), because "well-typed programs cannot go wrong". Two examples of type ...
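The two kinds of constant misuse can be made concrete with a tiny evaluator for the calculus of Figure 1. This is a hedged sketch under an environment-based call-by-value semantics (the tuple encoding and function names are illustrative):

```python
# Evaluator for E ::= x | λx.E | E1 E2 | 0 | succ E.
# The two errors of interest: applying a number, and succ of an abstraction.

def ev(term, env=None):
    env = env or {}
    kind = term[0]
    if kind == "var":
        return env[term[1]]
    if kind == "lam":                       # λx.E closes over env
        return ("closure", term[1], term[2], env)
    if kind == "zero":
        return 0
    if kind == "succ":
        v = ev(term[1], env)
        if not isinstance(v, int):
            raise TypeError("succ applied to an abstraction")
        return v + 1
    if kind == "app":
        f, a = ev(term[1], env), ev(term[2], env)
        if isinstance(f, int):
            raise TypeError("a number applied to an argument")
        _, x, body, cenv = f
        return ev(body, {**cenv, x: a})

# (λx. succ x) 0 is safe and evaluates to 1.
assert ev(("app", ("lam", "x", ("succ", ("var", "x"))), ("zero",))) == 1
```

A safety analysis must guarantee that neither `TypeError` branch is reachable for any evaluation of the analyzed program.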
Eta-Expansion does the Trick
In ACM TRANSACTIONS ON PROGRAMMING LANGUAGES AND SYSTEMS, 1996
Cited by 21 (6 self)

Abstract
Partial-evaluation folklore has it that massaging one's source programs can make them specialize better. In Jones, Gomard, and Sestoft's recent textbook, a whole chapter is dedicated to listing such "binding-time improvements": non-standard use of continuation-passing style, eta-expansion, and a popular transformation called "The Trick". We provide a unified view of these binding-time improvements, from a typing perspective. Just as a ...
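Eta-expansion itself is the rewriting of a term e of function type into λx. e x, which is extensionally equivalent but syntactically an abstraction, and thus static for a partial evaluator even when e is dynamic. A minimal sketch (the name `eta_expand` is illustrative):

```python
# Eta-expansion: e of function type becomes λx. e x.
# The wrapper is a visible abstraction, which a partial evaluator can
# treat as static even if e itself is dynamic — a binding-time improvement.

def eta_expand(e):
    return lambda x: e(x)      # λx. e x

inc = lambda n: n + 1
inc2 = eta_expand(inc)
assert inc2(41) == inc(41) == 42   # extensionally equal to the original
```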
Constraint Systems for Useless Variable Elimination
1998
Cited by 21 (1 self)

Abstract
A useless variable is one whose value contributes nothing to the final outcome of a computation. Such variables are unlikely to occur in human-produced code, but may be introduced by various program transformations. We would like to eliminate useless parameters from procedures and eliminate the corresponding actual parameters from their call sites. This transformation is the extension to higher-order programming of a variety of dead-code elimination optimizations that are important in compilers for first-order imperative languages. Shivers has presented such a transformation. We reformulate the transformation and prove its correctness. We believe that this correctness proof can be a model for proofs of other analysis-based transformations. We proceed as follows:
- We reformulate Shivers' analysis as a set of constraints; since the constraints are conditional inclusions, they can be solved using standard algorithms.
- We prove that any solution to the constraints is sound: that tw...
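Conditional-inclusion constraints of the kind mentioned above can be solved by a simple fixpoint iteration. The following sketch uses a hypothetical dependency map, not the paper's formulation: a parameter is useful iff its value reaches the result, directly or through another useful parameter:

```python
# For each parameter, the set of places its value flows to:
# "result" marks a direct contribution to the output. (Hypothetical program.)
uses = {
    "a": {"result"},   # a reaches the output directly
    "b": {"a"},        # b flows only into a's position at a call site
    "c": set(),        # c is never used
}

# Fixpoint: useful = transitively reaches "result".
useful = set()
changed = True
while changed:
    changed = False
    for var, targets in uses.items():
        if var not in useful and ("result" in targets or targets & useful):
            useful.add(var)
            changed = True

assert useful == {"a", "b"}   # c is a useless variable; its parameter
                              # and matching arguments can be eliminated
```

Because the constraints are monotone inclusions over finite sets, the iteration terminates and standard worklist algorithms solve them efficiently.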
Correctness of Monadic State: An Imperative Call-by-Need Calculus
In Proc. 25th ACM Symposium on Principles of Programming Languages, 1998
Cited by 20 (2 self)

Abstract
The extension of Haskell with a built-in state monad combines mathematical elegance with operational efficiency:
- Semantically, at the source language level, constructs that act on the state are viewed as functions that pass an explicit store data structure around.
- Operationally, at the implementation level, constructs that act on the state are viewed as statements whose evaluation has the side effect of updating the implicit global store in place.
There are several unproven conjectures that the two views are consistent. Recently, we have noted that the consistency of the two views is far from obvious: all it takes for the implementation to become unsound is one judiciously placed beta-step in the optimization phase of the compiler. This discovery motivates the current paper, in which we formalize and show the correctness of the implementation of monadic state. For the proof, we first design a typed call-by-need language that models the intermediate language of the compiler, to...
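The semantic (store-passing) view can be sketched in a few lines; this is an illustration of the idea under assumed names (`read`, `write`, `run`), not the paper's calculus:

```python
# Semantic view: each state action is a function from a store to
# (result, new store); sequencing threads the store through explicitly.

def write(k, v):
    return lambda store: (None, {**store, k: v})

def read(k):
    return lambda store: (store[k], store)

def run(actions, store):
    result = None
    for act in actions:
        result, store = act(store)   # thread the store action by action
    return result, store

res, final = run([write("x", 1), read("x")], {})
assert (res, final) == (1, {"x": 1})
```

The implementation view performs the same sequence as in-place updates of one global store; the paper's contribution is proving that the two views agree even in the presence of compiler optimizations such as beta reduction.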
Order-of-Evaluation Analysis for Destructive Updates in Strict Functional Languages with Flat Aggregates
In Conference on Functional Programming Languages and Computer Architecture, 1993
Cited by 19 (1 self)

Abstract
The aggregate update problem in functional languages is concerned with detecting cases where a functional array update operation can be implemented destructively in constant time. Previous work on this problem has assumed a fixed order of evaluation of expressions. In this paper, we devise a simple analysis, for strict functional languages with flat aggregates, that derives a good order of evaluation for making the updates destructive. Our work improves Hudak's work [14] on abstract reference counting, which assumes a fixed order of evaluation and uses the domain of sticky reference counts. Our abstract reference counting uses a 2-point domain. We show that for programs with no aliasing, our analysis is provably more precise than Hudak's approach (even if the fixed order of evaluation chosen by Hudak happens to be the right order). We also show that our analysis algorithm runs in polynomial time. To the best of our knowledge, no previous work shows polynomial time complexity. We suggest ...
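The payoff of the analysis is the choice between an O(1) in-place update and an O(n) copying update. A minimal sketch of that choice (the `unshared` flag stands in for the analysis result; names are illustrative):

```python
# A functional array update may be performed destructively (in place, O(1))
# exactly when the analysis proves the array has no other live references.

def update(arr, i, v, unshared):
    if unshared:            # analysis proved no aliasing: mutate in place
        arr[i] = v
        return arr
    new = list(arr)         # otherwise copy, preserving the old version
    new[i] = v
    return new

a = [0, 0, 0]
b = update(a, 1, 7, unshared=True)
assert b is a and a == [0, 7, 0]   # destructive: no copy was made
```

Choosing a good order of evaluation enlarges the set of updates for which the `unshared` branch is provably safe, which is the paper's improvement over a fixed evaluation order.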
Correctness of Binding-time Analysis
1993
Cited by 17 (6 self)

Abstract
A binding-time analysis is correct if it always produces consistent binding-time information. Consistency prevents partial evaluators from "going wrong". A sufficient and decidable condition for consistency, called well-annotatedness, was first presented by Gomard and Jones. In this paper we prove that a weaker condition implies consistency. Our condition is decidable, subsumes the one of Gomard and Jones, and was first studied by Schwartzbach and the present author. Our result implies the correctness of the binding-time analysis of Mogensen, and it indicates the correctness of the core of the binding-time analyses of Bondorf and Consel. We also prove that all partial evaluators will on termination have eliminated all "eliminable"-marked parts of an input which satisfies our condition. This generalizes a result of Gomard. Our development is for the pure λ-calculus with explicit binding-time annotations.

1 Introduction

A partial evaluator is an implementation of Kleene's S-m-n theorem. ...
A New Approach to Control Flow Analysis
In Lecture, 1998
Cited by 16 (3 self)

Abstract
We develop a control flow analysis algorithm for PCF based on game semantics. The analysis is closely related to Shivers' 0CFA analysis and the algorithm is shown to be cubic. The game semantics basis for the algorithm means that it can be naturally extended to handle strict languages and languages with imperative features. These extensions are discussed in the paper. We sketch the correctness proof for the algorithm. We also illustrate an algorithm for computing k-limited CFA.
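The 0CFA flavour the paper relates to can be sketched as inclusion constraints solved by fixpoint iteration. The program and constraint encoding below are hypothetical, chosen only to show the characteristic 0CFA imprecision (one abstract cache per variable, so all call sites merge):

```python
# 0CFA sketch for the hypothetical program:  f = λx. x;  f(1);  f(2)
# One abstract set per variable/label; both call sites flow into the
# same parameter set, since 0CFA is context-insensitive.

flows = [("lit:1", "x"), ("lit:2", "x"), ("x", "ret_f")]  # inclusion constraints
values = {"lit:1": {1}, "lit:2": {2}, "x": set(), "ret_f": set()}

changed = True
while changed:
    changed = False
    for src, dst in flows:
        if not values[src] <= values[dst]:
            values[dst] |= values[src]
            changed = True

# Both calls are merged: x (and hence f's result) may be 1 or 2.
assert values["ret_f"] == {1, 2}
```

k-limited CFA refines this by distinguishing up to k levels of calling context, splitting the merged sets at the cost of a larger analysis.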