Results 11–20 of 45
Binding-time Analysis: Abstract Interpretation versus Type Inference
In Proc. ICCL'94, Fifth IEEE International Conference on Computer Languages, 1994
"... Bindingtime analysis is important in partial evaluators. Its task is to determine which parts of a program can be evaluated if some of the expected input is known. Two approaches to do this are abstract interpretation and type inference. We compare two specific such analyses to see which one deter ..."
Cited by 9 (3 self)

Abstract: Binding-time analysis is important in partial evaluators. Its task is to determine which parts of a program can be evaluated if some of the expected input is known. Two approaches to this are abstract interpretation and type inference. We compare two specific such analyses to see which one determines the most program parts to be eliminable. The first is an abstract interpretation approach based on closure analysis; the second is the type inference approach of Gomard and Jones. Both apply to the pure lambda calculus. We prove that the abstract interpretation approach is more powerful than that of Gomard and Jones: the former determines the same, and possibly more, program parts to be eliminable as the latter.
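The division of a program into eliminable (static) and residual (dynamic) parts can be sketched with a minimal abstract interpretation. The paper itself treats the pure lambda calculus; the tiny first-order expression language and all names below are hypothetical, chosen only to show the idea:

```python
# Sketch of a binding-time analysis over a tiny first-order expression
# language (illustrative only; the paper treats the pure lambda calculus).
# Expressions: ('lit', n), ('var', name), ('add', e1, e2).
# Binding times: 'S' (static: computable from known input) < 'D' (dynamic).

def lub(a, b):
    """Least upper bound on the two-point domain S < D."""
    return 'D' if 'D' in (a, b) else 'S'

def bta(expr, division):
    """Return the binding time of expr under a division mapping each
    free variable to 'S' or 'D'; static parts are eliminable."""
    tag = expr[0]
    if tag == 'lit':
        return 'S'
    if tag == 'var':
        return division[expr[1]]
    if tag == 'add':
        return lub(bta(expr[1], division), bta(expr[2], division))
    raise ValueError(tag)

# With x known and y unknown, x + 1 is static (eliminable at
# specialisation time) while (x + 1) + y stays dynamic.
div = {'x': 'S', 'y': 'D'}
inner = ('add', ('var', 'x'), ('lit', 1))
print(bta(inner, div))                          # S
print(bta(('add', inner, ('var', 'y')), div))   # D
```

A more powerful analysis, in the sense of the paper, is one that classifies more subexpressions as 'S' under the same division.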
Strictness and Totality Analysis with Conjunction
In TAPSOFT'95, LNCS 915, 1995
"... We extend the strictness and totality analysis of [12] by allowing conjunction at all levels rather than at the toplevel. We prove the strictness and totality analysis correct with respect to a denotational semantics and finally construct an algorithm for inferring the strictness and totality prope ..."
Cited by 7 (1 self)

Abstract: We extend the strictness and totality analysis of [12] by allowing conjunction at all levels rather than only at the top level. We prove the strictness and totality analysis correct with respect to a denotational semantics, and finally construct an algorithm for inferring the strictness and totality properties.

1 Introduction
Strictness analysis has proved useful in the implementation of lazy functional languages like Miranda, Lazy ML and Haskell: when a function is strict, it is safe to evaluate its argument before performing the function call. Totality analysis has not been adopted so widely: if the argument to a function is known to terminate, then it is safe to evaluate it before performing the function call [9]. In the literature there are several approaches to the specification of strictness analysis: abstract interpretation (e.g. [10, 3]), projection analysis (e.g. [14]) and inference-based methods (e.g. [2, 6, 7, 8, 15]). Totality analysis has received much less attention and has pri...
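The claim that a strict function's argument may safely be evaluated before the call can be made concrete with a small abstract interpretation in the style of the cited work. This is an illustrative sketch over a hypothetical first-order language, not the paper's inference system:

```python
# Strictness by abstract interpretation on the two-point domain {0, 1},
# where 0 abstracts "definitely undefined" and 1 "possibly defined".
# A function is strict in its parameter when the abstract body maps
# parameter = 0 to 0. (Illustrative sketch, not the paper's system.)

def abs_eval(expr, env):
    tag = expr[0]
    if tag == 'lit':                 # a constant is always defined
        return 1
    if tag == 'var':
        return env[expr[1]]
    if tag == 'add':                 # + needs both operands: take the meet
        return min(abs_eval(expr[1], env), abs_eval(expr[2], env))
    if tag == 'if':                  # needs the condition; joins the branches
        return min(abs_eval(expr[1], env),
                   max(abs_eval(expr[2], env), abs_eval(expr[3], env)))
    raise ValueError(tag)

def is_strict(body, param):
    """Safe to evaluate the argument before the call iff this is True."""
    return abs_eval(body, {param: 0}) == 0

# f x = x + 1 is strict in x; g x = 1 ignores x and is not.
print(is_strict(('add', ('var', 'x'), ('lit', 1)), 'x'))   # True
print(is_strict(('lit', 1), 'x'))                          # False
```

Allowing conjunction below the top level, as the paper does, corresponds to recording several such properties of one function simultaneously rather than a single one.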
A Logical Framework for Program Analysis
In Proceedings of the 1992 Glasgow Functional Programming Workshop, 1992
"... Using logics to express program properties, and deduction systems for proving properties of programs, gives a very elegant way of defining program analysis techniques. This paper addresses a shortcoming of previous work in the area by establishing a more general framework for such logics, as is comm ..."
Cited by 7 (0 self)

Abstract: Using logics to express program properties, and deduction systems for proving properties of programs, gives a very elegant way of defining program analysis techniques. This paper addresses a shortcoming of previous work in the area by establishing a more general framework for such logics, as is commonly done for program analysis using abstract interpretation. Moreover, there are natural extensions of this work which deal with polymorphic languages.

1 Introduction
Kuo and Mishra gave a 'type' deduction system for proving strictness properties of programs, and gave a type inference (sometimes called type reconstruction) algorithm for determining these strictness types [10]. The algorithm was proved correct by showing that the types deduced by it were true in an operational model of the language. They observed that their algorithm was not as powerful as one based on the strictness abstract interpretation of [4], and it appeared to be because their type system lacked intersection types. Bo...
A Theory of Program Refinement
1998
"... We give a canonical program refinement calculus based on the lambda calculus and classical firstorder predicate logic, and study its proof theory and semantics. The intention is to construct a metalanguage for refinement in which basic principles of program development can be studied. The idea is t ..."
Cited by 6 (1 self)

Abstract: We give a canonical program refinement calculus based on the lambda calculus and classical first-order predicate logic, and study its proof theory and semantics. The intention is to construct a metalanguage for refinement in which basic principles of program development can be studied. The idea is that it should be possible to induce a refinement calculus in a generic manner from a programming language and a program logic. For concreteness, we adopt the simply-typed lambda calculus augmented with primitive recursion as a paradigmatic typed functional programming language, and use classical first-order logic as a simple program logic. A key feature is the construction of the refinement calculus in a modular fashion, as the combination of two orthogonal extensions to the underlying programming language (in this case, the simply-typed lambda calculus). The crucial observation is that a refinement calculus is given by extending a programming language to allow indeterminate expressions (or 'stubs') involving the construction 'some program x such that P'. Factoring this into 'some x...'
A Syntactic Approach to Fixed Point Computation on Finite Domains
In Proc. 1992 ACM Symposium on Lisp and Functional Programming, 1992
"... We propose a syntactic approach to performing fixed point computation on finite domains. Finding fixed points in finite domains for monotonic functions is an essential task when calculating abstract semantics of functional programs. Previous methods for fixed point finding have been mainly based on ..."
Cited by 5 (0 self)

Abstract: We propose a syntactic approach to performing fixed point computation on finite domains. Finding fixed points in finite domains for monotonic functions is an essential task when calculating abstract semantics of functional programs. Previous methods for fixed point finding have been mainly based on semantic approaches, which may be very inefficient even for simple programs. We outline the development of a syntactic approach, and show that the syntactic approach is sound and complete with respect to the semantics. A few examples are provided to illustrate this syntactic approach.

1 Motivation and Introduction
Finding fixed points for monotonic functions over finite domains is an important task in abstract interpretation. In abstract interpretation, a standard (or non-standard) semantics of a functional program is abstracted to a monotonic function over finite domains, and, if the program contains recursive definitions, fixed point finding is used to calculate the abstract semantics of th...
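The semantic baseline the paper contrasts with is standard Kleene iteration: apply the monotone function repeatedly, starting from the bottom element, until the ascending chain stabilises, which is guaranteed to terminate on a finite domain. A minimal sketch (the example lattice and names are hypothetical):

```python
# Naive semantic fixed-point finding: iterate a monotone f from bottom
# until f(x) == x. On a finite domain the ascending chain
# bottom <= f(bottom) <= f(f(bottom)) <= ... must stabilise, but each
# step may be costly, which motivates a syntactic alternative.

def lfp(f, bottom):
    """Least fixed point of a monotone f on a finite domain."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Monotone function on the powerset lattice of {0, 1, 2} ordered by
# inclusion: always add 0, and add x + 1 for each x < 2 already present.
f = lambda s: frozenset(s) | {0} | {x + 1 for x in s if x < 2}
print(sorted(lfp(f, frozenset())))   # [0, 1, 2]
```

Here the chain is {} → {0} → {0, 1} → {0, 1, 2}, stabilising after three applications; the cost of each application of f is what a syntactic approach seeks to avoid.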
Lazy Type Inference for the Strictness Analysis of Lists
1994
"... We present a type inference system for the strictness analysis of lists and we show that it can be used as the basis for an efficient algorithm. The algorithm is as accurate as the usual abstract interpretation technique. One distinctive advantage of this approach is that it is not necessary to impo ..."
Cited by 4 (1 self)

Abstract: We present a type inference system for the strictness analysis of lists, and we show that it can be used as the basis for an efficient algorithm. The algorithm is as accurate as the usual abstract interpretation technique. One distinctive advantage of this approach is that it is not necessary to impose an abstract domain of a particular depth prior to the analysis: the lazy type algorithm will instead explore the part of a potentially infinite domain required to prove the strictness property.

1 Introduction
Simple strictness analysis returns information about the fact that the result of a function application is undefined when some of the arguments are undefined. This information can be used in a compiler for a lazy functional language because the argument of a strict function can be evaluated (up to weak head normal form) and passed by value. However, a more sophisticated property might be useful in the presence of lists or other recursive data structures, which are pervasive in functio...
Strictness types: An inference algorithm and an application
1993
"... This report deals with strictness types, a way of recording whether a function needs its argument(s) or not. We shall present an inference system for assigning strictness types to expressions and subsequently we transform this system into an algorithm capable of annotating expressions with strictnes ..."
Cited by 3 (2 self)

Abstract: This report deals with strictness types, a way of recording whether a function needs its argument(s) or not. We present an inference system for assigning strictness types to expressions, and subsequently we transform this system into an algorithm capable of annotating expressions with strictness types. We give an example of a transformation which can be optimized by means of these annotations, and finally we prove the correctness of the optimized transformation, at the same time proving the correctness of the annotation. Everything has been implemented; documentation can be found in the appendix.
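One kind of transformation that such annotations enable can be sketched under a call-by-name thunk model of lazy evaluation. This is an illustration of the general idea, not the report's own transformation, and all names are hypothetical: if a function is annotated strict in an argument, that argument may be forced once, up front, instead of on every use inside the function.

```python
# Using a strictness annotation to optimise application under a
# call-by-name thunk model (illustrative; not the report's own
# transformation). A strict function will demand its argument anyway,
# so evaluating it eagerly, exactly once, is safe and avoids
# re-evaluation at each use site.

def apply_annotated(f, thunk, strict):
    if strict:
        v = thunk()               # force eagerly, exactly once
        return f(lambda: v)       # pass the already-computed value
    return f(thunk)               # keep the argument unevaluated

calls = []
def expensive():
    calls.append(1)               # count evaluations of the argument
    return 21

f = lambda t: t() + t()           # f forces its argument twice

print(apply_annotated(f, expensive, strict=True))   # 42
print(len(calls))                                   # 1
```

Without the annotation (strict=False), the thunk would run twice here; with it, once. Proving such a transformation correct is exactly what requires the annotation itself to be sound.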
Projection-based Program Analysis
1994
"... Projectionbased program analysis techniques are remarkable for their ability togive highly detailed and useful information not obtainable by other methods. The rst proposed projectionbased analysis techniques were those of Wadler and Hughes for strictness analysis, and Launchbury for bindingtime ..."
Cited by 3 (0 self)

Abstract: Projection-based program analysis techniques are remarkable for their ability to give highly detailed and useful information not obtainable by other methods. The first proposed projection-based analysis techniques were those of Wadler and Hughes for strictness analysis, and Launchbury for binding-time analysis; both techniques are restricted to the analysis of first-order monomorphic languages. Hughes and Launchbury generalised the strictness analysis technique, and Launchbury the binding-time analysis technique, to handle polymorphic languages, again restricted to first order. Other than a general approach to higher-order analysis suggested by Hughes, and an ad hoc implementation of higher-order binding-time analysis by Mogensen, neither of which had any formal notion of correctness, there has been no successful generalisation to higher-order analysis. We present a complete redevelopment of monomorphic projection-based program analysis from first principles, starting by considering the analysis of functions (rather than programs) to establish bounds on the intrinsic power of projection-based analysis, showing also that projection-based analysis can capture interesting termination