Results 1 - 7 of 7
Deriving algorithms from type inference systems: Application to strictness analysis
1994
"... The role of nonstandard type inference in static program analysis has been much studied recently. Early work emphasised the efficiency of type inference algorithms and paid little attention to the correctness of the inference system. Recently more powerful inference systems have been investigated b ..."
Abstract

Cited by 26 (8 self)
 Add to MetaCart
The role of nonstandard type inference in static program analysis has been much studied recently. Early work emphasised the efficiency of type inference algorithms and paid little attention to the correctness of the inference system. Recently, more powerful inference systems have been investigated, but the connection with efficient inference algorithms has been obscured. The contribution of this paper is twofold: first, we show how to transform a program logic into an algorithm and, second, we introduce the notion of lazy types and show how to derive an efficient algorithm for strictness analysis.

1 Introduction

Two major formal frameworks have been proposed for static analysis of functional languages: abstract interpretation and type inference. A lot of work has been done to characterise formally the correctness and the power of abstract interpretation. However, the development of algorithms has not kept pace with the theoretical developments. This is now a major barrier that is preven...
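The kind of iteration such a strictness-analysis algorithm performs can be sketched in miniature. The following is an illustration of first-order strictness analysis by Kleene iteration on the two-point abstract domain, not the paper's lazy-type algorithm; the example function and all names are assumptions made here for the sketch:

```python
import itertools

BOT, TOP = 0, 1   # 0 = "definitely undefined", 1 = "possibly defined"

def lfp(step, arity):
    """Least fixed point of a monotone functional `step`, found by
    Kleene iteration starting from the everywhere-bottom function."""
    args = list(itertools.product((BOT, TOP), repeat=arity))
    f = {a: BOT for a in args}
    while True:
        g = {a: step(f, a) for a in args}
        if g == f:
            return f
        f = g

# Abstract version of  f x y = if x == 0 then y else f (x - 1) y:
# the conditional's test forces x; the two branches are joined with |.
def step(f, a):
    x, y = a
    return x & (y | f[(x, y)])

fix = lfp(step, 2)
# fix[(BOT, TOP)] == 0 and fix[(TOP, BOT)] == 0:
# the analysis finds f strict in both arguments.
```

Lazy types, as the abstract describes, aim to reach the same conclusions without tabulating the abstract function over its whole domain.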
Polymorphic Strictness Analysis Using Frontiers
Proceedings of the 1993 ACM Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM '93), ACM
1992
"... This paper shows how to implement sensible polymorphic strictness analysis using the Frontiers algorithm. A central notion is to only ever analyse each function once, at its simplest polymorphic instance. Subsequent nonbase uses of functions are dealt with by generalising their simplest instance an ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
This paper shows how to implement sensible polymorphic strictness analysis using the Frontiers algorithm. A central notion is to only ever analyse each function once, at its simplest polymorphic instance. Subsequent non-base uses of functions are dealt with by generalising their simplest-instance analyses. This generalisation is done using an algorithm developed by Baraki, based on embedding-closure pairs. Compared with an alternative approach of expanding the program out into a collection of monomorphic instances, this technique is hundreds of times faster for realistic programs. There are some approximations involved, but these do not seem to have a detrimental effect on the overall result. The overall effect of this technology is to considerably expand the range of programs for which the Frontiers algorithm gives useful results reasonably quickly.

1 Introduction

The Frontiers algorithm was introduced in [CP85] as an allegedly efficient way of doing forwards strictness analysis, al...
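The frontier idea underlying the algorithm can be sketched as follows: a monotone function from bit-vectors to {0, 1} is fully determined by its minimal-1 frontier, the minimal arguments on which it returns 1, which is usually far smaller than its full table. This is a minimal illustration of the representation, not the paper's implementation; the example function and all names are assumptions:

```python
import itertools

def leq(a, b):                       # pointwise order on bit-vectors
    return all(x <= y for x, y in zip(a, b))

def min1_frontier(f, arity):
    """Minimal arguments on which the monotone function f returns 1."""
    pts = [a for a in itertools.product((0, 1), repeat=arity) if f(a)]
    return {a for a in pts if not any(leq(b, a) and b != a for b in pts)}

def from_frontier(frontier, a):      # recover f's value from the frontier
    return int(any(leq(m, a) for m in frontier))

# Example: f(x, y, z) = x & (y | z) -- strict in x, jointly in y and z.
f = lambda a: a[0] & (a[1] | a[2])
fr = min1_frontier(f, 3)
# fr == {(1, 0, 1), (1, 1, 0)}: two frontier points stand in for
# all eight entries of the function's table.
```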
Three Nondeterminism Analyses in a Parallel-Functional Language
2001
"... This paper is an extension of a previous work where two nondeterminism analyses were presented. The first of them was efficient (linear) but not very powerful and the second one was more powerful but very expensive (exponential). Here, we develop an intermediate analysis in both aspects, efficiency ..."
Abstract

Cited by 5 (4 self)
 Add to MetaCart
This paper is an extension of previous work in which two nondeterminism analyses were presented. The first of them was efficient (linear) but not very powerful, and the second was more powerful but very expensive (exponential). Here we develop an analysis that is intermediate in both aspects, efficiency and power. The improvement in efficiency is obtained by speeding up the fixpoint calculation by means of a widening operator and by representing functions through easily comparable signatures. Details about the implementation and its cost are also given. Additionally: (1) the second and third analyses are extended with polymorphism; (2) we prove that the domains of the second and third analyses form a category in which the morphisms are embedding-closure pairs of functions, respectively called abstraction and concretisation functions; and (3) we formally relate the analyses and prove that the first analysis is a safe approximation to the third one, and that the third one is a safe approximation to the second one. In this way the three analyses become totally ordered by increasing cost and precision.
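The role of a widening operator in speeding up a fixpoint calculation can be sketched on the standard interval domain (rather than the paper's nondeterminism domain); the example loop and all names are assumptions made for the illustration:

```python
# Widening sketch: interval analysis of "i = 0; while ...: i = i + 1".
INF = float('inf')

def widen(old, new):
    """Interval widening: any bound still moving jumps to infinity,
    so the ascending chain is cut short."""
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

def step(iv):
    # Abstract effect of one iteration of "i = i + 1",
    # joined with the loop-entry state i = 0.
    lo, hi = iv
    return (min(0, lo + 1), max(0, hi + 1))

iv, rounds = (0, 0), 0
while True:
    nxt = step(iv)
    if nxt == iv:
        break
    iv = widen(iv, nxt)
    rounds += 1
# iv == (0, inf) after a single widening round, where plain Kleene
# iteration would need one round per concrete loop iteration.
```

The price is precision: the result over-approximates the exact bound, which is why such analyses trade power for efficiency exactly as the abstract describes.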
Lazy Type Inference for the Strictness Analysis of Lists
1994
"... We present a type inference system for the strictness analysis of lists and we show that it can be used as the basis for an efficient algorithm. The algorithm is as accurate as the usual abstract interpretation technique. One distinctive advantage of this approach is that it is not necessary to impo ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
We present a type inference system for the strictness analysis of lists and we show that it can be used as the basis for an efficient algorithm. The algorithm is as accurate as the usual abstract interpretation technique. One distinctive advantage of this approach is that it is not necessary to impose an abstract domain of a particular depth prior to the analysis: the lazy type algorithm will instead explore the part of a potentially infinite domain required to prove the strictness property.

1 Introduction

Simple strictness analysis determines whether the result of a function application is undefined when some of the arguments are undefined. This information can be used in a compiler for a lazy functional language because the argument of a strict function can be evaluated (up to weak head normal form) and passed by value. However, a more sophisticated property might be useful in the presence of lists or other recursive data structures, which are pervasive in functio...
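For contrast, here is a minimal sketch of the kind of fixed-depth list abstraction, in the style of Wadler's four-point domain, that the lazy-type approach avoids committing to in advance. The integer encoding and function names are assumptions made for the illustration:

```python
# Four-point abstraction of lists of flat values, ordered 0 < 1 < 2 < 3:
# 0 = undefined list, 1 = infinite or partial spine,
# 2 = finite spine containing an undefined element, 3 = fully defined.

def length_abs(l):
    # length only forces the spine, so it succeeds from point 2 upward
    return 1 if l >= 2 else 0

def sum_abs(l):
    # sum forces the spine and every element, so only a total list succeeds
    return 1 if l == 3 else 0
```

A fixed domain like this must be chosen before the analysis runs; the abstract's point is that a lazy algorithm can instead unfold only the part of a deeper (potentially infinite) domain that the strictness property at hand requires.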
Verification of Embedded Software: Problems and Perspectives
Proceedings of the 1st International Workshop on Embedded Software (EMSOFT), USA. LNCS 2211, Springer-Verlag
2001
"... Computer aided formal methods have been very successful for the verification or at least enhanced debugging of hardware. The cost of correction of a hardware bug is huge enough to justify high investments in alternatives to testing such as correctness verification. This is not the case for software ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
Computer-aided formal methods have been very successful for the verification, or at least enhanced debugging, of hardware. The cost of correcting a hardware bug is large enough to justify high investments in alternatives to testing such as correctness verification. This is not the case for software, where bugs are quite common and can easily be handled through online updates. In the area of embedded software, however, errors are hardly tolerable. Such embedded software is often safety-critical, so a software failure might create a safety hazard in the equipment and put human life in danger. Embedded software verification is thus a research area of growing importance. Present-day software verification technology can certainly be useful but is still too limited to cope with the formidable challenge of complete software verification. We highlight some of the problems to be solved and envision possible abstract-interpretation-based static analysis solutions.
A Comparison between Three Nondeterminism Analyses in a Parallel-Functional Language
In Primeras Jornadas sobre Programación y Lenguajes (PROLE'01)
2001
"... The paper compares three analyses to determine when an Eden expression is sure to be deterministic, and when it may be nondeterministic. This work extends ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
The paper compares three analyses to determine when an Eden expression is sure to be deterministic, and when it may be nondeterministic. This work extends ...
A Syntactic Method for Finding Least Fixed Points of Higher-Order Functions Over Finite Domains
1997
"... This paper describes a method for finding the least fixed points of higherorder functions over finite domains using symbolic manipulation. Fixed point finding is an essential component in the calculation of abstract semantics of functional programs, providing the foundation for progra ..."
Abstract
 Add to MetaCart
This paper describes a method for finding the least fixed points of higher-order functions over finite domains using symbolic manipulation. Fixed point finding is an essential component in the calculation of abstract semantics of functional programs, providing the foundation for program analyses based on abstract interpretation. Previous methods for fixed point finding have primarily used semantic approaches, which often must traverse large portions of the semantic domain even for simple programs. This paper provides the theoretical framework for a syntax-based analysis that is potentially very fast. The proposed syntactic method is based on an augmented simply typed lambda calculus where the symbolic representation of each function produced in the fixed point iteration is transformed to a syntactic normal form. Normal forms resulting from successive iterations are then compared syntactically to determine their ordering in the semantic domain, and to decide whether a fixe...
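The idea of detecting the fixed point syntactically can be sketched in miniature: keep each iterate in a normal form and compare normal forms for equality, instead of tabulating the function over its whole domain. This illustrates the general idea only, not the paper's augmented lambda calculus; the monotone-DNF normal form, the example functional, and all names are assumptions:

```python
def normalize(clauses):
    """Monotone-DNF normal form: a set of clauses (sets of variable
    names) with absorbed (non-minimal) clauses removed."""
    cs = {frozenset(c) for c in clauses}
    return frozenset(c for c in cs if not any(d < c for d in cs))

def step(f):
    # Unfold F(g) = x OR (y AND g): distribute y into each clause of g.
    return normalize([{'x'}] + [c | {'y'} for c in f])

f = frozenset()                  # bottom: the empty DNF, i.e. False
while True:
    g = step(f)
    if g == f:                   # purely syntactic comparison
        break
    f = g
# The iterates stabilise at the normal form {{x}}: the least fixed
# point of F is the function (x, y) -> x, found without enumerating
# the semantic domain of two-argument boolean functions.
```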