Results 11–20 of 35
Polymorphic Strictness Analysis Using Frontiers
 Proceedings of the 1993 ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM '93), ACM
, 1993
Abstract

Cited by 8 (0 self)
This paper shows how to implement sensible polymorphic strictness analysis using the Frontiers algorithm. A central notion is to only ever analyse each function once, at its simplest polymorphic instance. Subsequent non-base uses of functions are dealt with by generalising their simplest-instance analyses. This generalisation is done using an algorithm developed by Baraki, based on embedding-closure pairs. Compared with an alternative approach of expanding the program out into a collection of monomorphic instances, this technique is hundreds of times faster for realistic programs. There are some approximations involved, but these do not seem to have a detrimental effect on the overall result. The overall effect of this technology is to considerably expand the range of programs for which the Frontiers algorithm gives useful results reasonably quickly.
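The core question behind any forwards strictness analysis, of which Frontiers is an efficient implementation strategy, can be sketched on the two-point abstract domain. The domain, operator names, and example functions below are illustrative assumptions, not code from the paper:

```haskell
-- Two-point strictness domain: Bot = "definitely diverges",
-- Top = "may terminate".
data Abs = Bot | Top deriving (Eq, Show)

-- Abstract conjunction/disjunction mirror the concrete semantics:
-- an expression diverges if any needed part does (aAnd), and may
-- terminate if any alternative may (aOr).
aAnd, aOr :: Abs -> Abs -> Abs
aAnd Bot _ = Bot
aAnd Top b = b
aOr Top _ = Top
aOr Bot b = b

-- Abstract semantics of: plus x y = x + y  (needs both arguments)
absPlus :: Abs -> Abs -> Abs
absPlus x y = x `aAnd` y

-- Abstract semantics of: cond p t e = if p then t else e
-- (needs p, but only one of t and e)
absCond :: Abs -> Abs -> Abs -> Abs
absCond p t e = p `aAnd` (t `aOr` e)

-- f is strict in its first argument iff feeding Bot there
-- (with Top elsewhere) forces the result to Bot.
strictIn1 :: (Abs -> Abs -> Abs) -> Bool
strictIn1 f = f Bot Top == Bot

main :: IO ()
main = do
  print (strictIn1 absPlus)    -- True: plus always needs x
  print (absCond Top Bot Top)  -- Top: the other branch may still terminate
  print (absCond Bot Top Top)  -- Bot: the test is always needed
```

The Frontiers algorithm itself is a search strategy for building such abstract function tables efficiently at higher types; the paper's contribution is re-using one table, built at the simplest instance, across all polymorphic instances.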
Three Nondeterminism Analyses in a Parallel-Functional Language
, 2001
Abstract

Cited by 8 (5 self)
This paper is an extension of a previous work where two nondeterminism analyses were presented. The first of them was efficient (linear) but not very powerful, and the second one was more powerful but very expensive (exponential). Here, we develop an intermediate analysis in both aspects, efficiency and power. The improvement in efficiency is obtained by speeding up the fixpoint calculation by means of a widening operator, and by representing functions through easily comparable signatures. Details about the implementation and its cost are also given. Additionally: (1) the second and third analyses are completed with polymorphism; (2) we prove that the domains in the second and third analyses form a category in which the morphisms are embedding-closure pairs of functions, respectively called abstraction and concretisation functions; and (3) we formally relate the analyses and prove that the first analysis is a safe approximation to the third one, and that the third one is a safe approximation to the second one. In this way the three analyses become totally ordered by increasing cost and precision.
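The widening trick mentioned in the abstract can be shown on a toy domain of upper bounds: when an iterate keeps growing, the widening operator jumps straight to Top, so the chain stabilises at once instead of climbing an unbounded ascending chain. The domain and functions here are invented for illustration, not taken from the paper:

```haskell
-- Toy abstract domain of upper bounds on a counter.
data UB = Fin Int | Top deriving (Eq, Show)

-- Widening: keep a stable bound, but jump to Top if it is still growing.
-- The result is always above both arguments, so soundness is preserved
-- at the cost of precision.
widen :: UB -> UB -> UB
widen (Fin a) (Fin b) | b <= a = Fin a
widen _ _ = Top

-- Iterate g, widening at each step, until the value stabilises.
wfix :: (UB -> UB) -> UB -> UB
wfix g x = let x' = widen x (g x)
           in if x' == x then x else wfix g x'

-- Models a loop counter that grows every iteration; plain Kleene
-- iteration on g would never terminate on this domain.
g :: UB -> UB
g (Fin n) = Fin (n + 1)
g Top = Top

main :: IO ()
main = print (wfix g (Fin 0))   -- Top, reached after two steps
```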
Investigation of Algebraic Query Optimisation for Database Programming Languages
 In Proceedings of the 20th Int'l Conference on Very Large Databases (VLDB)
, 1994
Abstract

Cited by 7 (3 self)
A major challenge still facing the designers and implementors of database programming languages (DBPLs) is that of query optimisation. We investigate algebraic query optimisation techniques for DBPLs in the context of a purely declarative functional language that supports sets as first-class objects. Since the language is computationally complete, issues such as non-termination of expressions and construction of infinite data structures can be investigated, whilst its declarative nature allows the issue of side effects to be avoided and a richer set of equivalences to be developed. The support of a set bulk data type enables much prior work on the optimisation of relational languages to be utilised. Finally, the language has a well-defined semantics which permits us to reason formally about the properties of expressions, such as their equivalence with other expressions and their termination.
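Algebraic equivalences of the kind such an optimiser exploits can be stated directly in a functional language; here on lists for concreteness (the paper works with sets). The rules and sample data are illustrative, not drawn from the paper:

```haskell
-- Rule: two cascaded selections fuse into a single pass.
selectSelect :: (a -> Bool) -> (a -> Bool) -> [a] -> [a]
selectSelect p q = filter (\x -> q x && p x)

-- Rule: a selection on the projected field commutes with the
-- projection, letting the optimiser filter before projecting.
naive, optimised :: [(Int, String)] -> [String]
naive = filter (/= "") . map snd
optimised = map snd . filter ((/= "") . snd)

main :: IO ()
main = do
  let xs = [(1, "a"), (2, ""), (3, "c")]
  print (naive xs == optimised xs)           -- True: the rewrite preserves meaning
  print (selectSelect even (> 1) [1 .. 6])   -- [2,4,6]
```

The declarative setting is what makes such rewrites safe to apply unconditionally: with no side effects, equivalence of expressions is equivalence of queries.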
Projections for Polymorphic First-Order Strictness Analysis
 Math. Struct. in Comp. Science
, 1991
Abstract

Cited by 6 (1 self)
this paper, that results from this kind of analysis are, in a sense, polymorphic. This confirms an earlier conjecture [19], and shows how the technique can be applied to first-order polymorphic functions. The paper is organised as follows. In the next section, we review projection-based strictness analysis very briefly. In Section 3 we introduce the types we will be working with: they are the objects of a category. We show that parameterised types are functors, with certain cancellation properties. In Section 4 we define strong and weak polymorphism: polymorphic functions in programming languages are strongly polymorphic, but we will need to use projections with a slightly weaker property. We prove that, under certain conditions, weakly polymorphic functions are characterised by any non-trivial instance. We can therefore analyse one monomorphic instance of a polymorphic function using existing techniques, and apply the results to every instance. In Section 5 we choose a finite set of projections for each type, suitable for use in a practical compiler. We call these specially chosen projections contexts, and we show examples of factorising contexts for compound types in order to facilitate application of the results of Section 4. We give a number of examples of polymorphic strictness analysis. Finally, in Section 6 we discuss related work and draw some conclusions.
A Syntactic Approach to Fixed Point Computation on Finite Domains
 In Proc. 1992 ACM Symposium on Lisp and Functional Programming
, 1992
Abstract

Cited by 5 (0 self)
We propose a syntactic approach to performing fixed point computation on finite domains. Finding fixed points in finite domains for monotonic functions is an essential task when calculating abstract semantics of functional programs. Previous methods for fixed point finding have been mainly based on semantic approaches, which may be very inefficient even for simple programs. We outline the development of a syntactic approach, and show that the syntactic approach is sound and complete with respect to the semantics. A few examples are provided to illustrate this syntactic approach.
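The "semantic approach" this paper improves on is Kleene iteration from bottom over a finite domain, comparing tabulated functions for equality at each step. The two-point domain and the example functional below are illustrative, not from the paper:

```haskell
-- Two-point abstract domain.
data Abs = Bot | Top deriving (Eq, Show)

aAnd, aOr :: Abs -> Abs -> Abs
aAnd Bot _ = Bot
aAnd Top b = b
aOr Top _ = Top
aOr Bot b = b

-- A monotone function on {Bot,Top}, tabulated so iterates compare with (==).
type Table = (Abs, Abs)   -- (result at Bot, result at Top)

apply :: Table -> Abs -> Abs
apply (b, _) Bot = b
apply (_, t) Top = t

-- Abstract functional for:  f x = if x == 0 then 0 else f (x - 1).
-- The test is strict in x; the branches are a constant and the call f x.
functional :: Table -> Table
functional f =
  ( Bot `aAnd` (Top `aOr` apply f Bot)
  , Top `aAnd` (Top `aOr` apply f Top) )

-- Least fixed point: iterate from the bottom table until stable.
-- Finiteness of the domain guarantees termination.
lfp :: (Table -> Table) -> Table
lfp g = go (Bot, Bot)
  where go t = let t' = g t in if t' == t then t else go t'

main :: IO ()
main = print (lfp functional)   -- (Bot,Top): f is strict in its argument
```

The inefficiency the paper targets shows up at higher types, where each iterate is itself a (large) table; the syntactic approach manipulates the defining expressions instead.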
What About the Natural Numbers?
 Computer Languages
, 1989
Abstract

Cited by 5 (3 self)
A prime concern in the design of any general purpose programming language should be the ease and safety of working with natural numbers, particularly in conjunction with discrete data structures. This theme of commitment to the naturals as the basic numeric data type is explored in the context of a lazy functional language. Non-title keywords: structural correspondence, numeric types, total functions, closed systems, functional programming, lazy evaluation.
Partitioning Non-strict Languages for Multithreaded Code Generation
 Master's thesis, Dept. of EECS, MIT
, 1994
Abstract

Cited by 4 (1 self)
In a non-strict language, functions may return values before their arguments are available, and data structures may be defined before all their components are defined. Compiling such languages to conventional hardware is not straightforward; instructions do not have a fixed compile-time ordering. Such an ordering is necessary to execute programs efficiently on current microprocessors. Partitioning is the process of compiling a non-strict program into threads (i.e., sequences of instructions). This process involves detecting data dependencies at compile time and using these dependencies to "sequentialize" parts of the program. Previous work on partitioning did not propagate dependence information across recursive procedure boundaries. Using a representation known as Paths, we are able to represent dependence information of recursive functions, and we incorporate Paths into a known partitioning algorithm. However, this algorithm fails to make use of all the information contained in Paths...
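The two freedoms the abstract opens with can each be shown in a line or two of Haskell: a function can return before an argument is available, and a structure can be consumed before it is fully defined. Illustrative code, not from the thesis:

```haskell
-- first never demands its second argument, so even a diverging
-- (undefined) argument is harmless.
first :: a -> b -> a
first x _ = x

-- A list defined in terms of itself; consumers may read any finite
-- prefix before the "whole" list exists.
ones :: [Int]
ones = 1 : ones

main :: IO ()
main = do
  print (first 42 (undefined :: Int))   -- 42
  print (take 3 ones)                   -- [1,1,1]
```

On conventional hardware, exactly this freedom is what forces dynamic scheduling and synchronization; partitioning recovers a static instruction order wherever dependence analysis shows the freedom is not actually used.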
Flexible And Practical Flow Analysis for Higher-Order Programming Languages
, 1996
Partitioning Non-strict Functional Languages for Multithreaded Code Generation
 In Proceedings of Static Analysis Symposium '95
, 1995
Abstract

Cited by 3 (0 self)
In this paper, we present a new approach to partitioning, the problem of generating sequential threads for programs written in a non-strict functional language. The goal of partitioning is to generate threads as large as possible, while retaining the non-strict semantics of the program. We define partitioning as a program transformation and design algorithms for basic block partitioning and interprocedural partitioning. The interprocedural algorithm presented here is more powerful than the ones previously known and is based on abstract interpretation, enabling the algorithm to handle recursion in a straightforward manner. We prove the correctness of these algorithms in a denotational semantic framework. Keywords: partitioning, abstract interpretation, demand and tolerance sets, interprocedural analysis, non-strict functional languages.
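The basic-block step can be shown in miniature: once data dependencies are known at compile time, dependence-satisfied instructions can be emitted in topological order as one sequential thread. Instruction names and the dependence relation below are invented for illustration:

```haskell
import Data.List (partition)

-- (instruction, instructions whose results it needs)
type Dep = (String, [String])

-- Repeatedly emit instructions whose dependencies are already scheduled.
schedule :: [Dep] -> [String]
schedule = go []
  where
    go done [] = done
    go done ds =
      let (ready, rest) = partition (\(_, ps) -> all (`elem` done) ps) ds
      in if null ready
           then done   -- a dependence cycle: these need dynamic scheduling
           else go (done ++ map fst ready) rest

main :: IO ()
main = print (schedule
  [ ("load_a", []), ("load_b", [])
  , ("add", ["load_a", "load_b"]), ("store", ["add"]) ])
  -- ["load_a","load_b","add","store"]
```

The interprocedural problem the paper addresses is what happens when a dependence edge crosses a call boundary, possibly into a recursive function, where abstract interpretation supplies the demand and tolerance sets.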
How Much Non-strictness do Lenient Programs Require?
 In Conf. on Func. Prog. Languages and Computer Architecture
, 1995
Abstract

Cited by 3 (0 self)
Lenient languages, such as Id90, have been touted as among the best functional languages for massively parallel machines [AHN88]. Lenient evaluation combines non-strict semantics with eager evaluation [Tra91]. Non-strictness gives these languages more expressive power than strict semantics, while eager evaluation ensures the highest degree of parallelism. Unfortunately, non-strictness incurs a large overhead, as it requires dynamic scheduling and synchronization. As a result, many powerful program analysis techniques have been developed to statically determine when non-strictness is not required [CPJ85, Tra91, Sch94]. This paper studies a large set of lenient programs and quantifies the degree of non-strictness they require. We identify several forms of non-strictness, including functional, conditional, and data structure non-strictness. Surprisingly, most Id90 programs require neither functional nor conditional non-strictness. Many benchmark programs, however, make use of a limited fo...