Results 1–10 of 32
The Bird-Meertens Formalism as a Parallel Model
 In Software for Parallel Computation, volume 106 of NATO ASI Series F
, 1993
Abstract

Cited by 41 (0 self)
The expense of developing and maintaining software is the major obstacle to the routine use of parallel computation. Architecture-independent programming offers a way of avoiding the problem, but the requirements for a model of parallel computation that will permit it are demanding. The Bird-Meertens formalism is an approach to developing and executing data-parallel programs; it encourages software development by equational transformation; it can be implemented efficiently across a wide range of architecture families; and it can be equipped with a realistic cost calculus, so that trade-offs in software design can be explored before implementation. It makes an ideal model of parallel computation. Keywords: general-purpose parallel computing, models of parallel computation, architecture-independent programming, categorical data type, program transformation, code generation.

1 Properties of Models of Parallel Computation

Parallel computation is still the domain of researchers and those ...
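The equational style of development the abstract describes can be illustrated with the classic map-fusion law, map f . map g = map (f . g), which rewrites two data-parallel traversals into one. A minimal Python sketch (illustrative names, not the formalism's notation):

```python
# Map fusion: the Bird-Meertens law  map f . map g == map (f . g)
# lets two list traversals be rewritten as one, preserving results.

def compose(f, g):
    return lambda x: f(g(x))

def map_list(f, xs):
    return [f(x) for x in xs]

xs = [1, 2, 3, 4]
double = lambda x: 2 * x
inc = lambda x: x + 1

# Unfused: two passes over the data.
unfused = map_list(double, map_list(inc, xs))
# Fused: one pass -- provably the same result, half the traversals.
fused = map_list(compose(double, inc), xs)

assert unfused == fused == [4, 6, 8, 10]
```

Because the rewrite is an equation, a cost calculus can compare the two sides before any code is run, which is the trade-off exploration the abstract refers to.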
Deriving incremental programs
, 1993
Abstract

Cited by 39 (21 self)
A systematic approach is given for deriving incremental programs from non-incremental programs written in a standard functional programming language. We exploit a number of program analysis and transformation techniques and domain-specific knowledge, centered around effective utilization of caching, in order to provide a degree of incrementality not otherwise achievable by a generic incremental evaluator.
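The core idea, deriving an incremental program that reuses a cached previous result instead of recomputing from scratch, can be sketched in a few lines (a hypothetical minimal example, not the paper's derivation method):

```python
# Incrementalization via caching: after appending an element, derive
# the new result from the cached old one instead of recomputing.

def sum_scratch(xs):
    # non-incremental version: O(n) on every update
    return sum(xs)

def sum_incremental(cached, x):
    # derived incremental version: O(1) per appended element
    return cached + x

xs = [3, 1, 4]
cache = sum_scratch(xs)            # compute once: 8
xs.append(1)
cache = sum_incremental(cache, 1)  # reuse the cache on the update
assert cache == sum_scratch(xs) == 9
```

The papers in this vein derive such incremental versions systematically, for far less trivial functions, by program transformation rather than by hand.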
A Naïve Time Analysis and its Theory of Cost Equivalence
 Journal of Logic and Computation
, 1995
Abstract

Cited by 39 (7 self)
Techniques for reasoning about extensional properties of functional programs are well understood, but methods for analysing the underlying intensional or operational properties have been much neglected. This paper begins with the development of a simple but useful calculus for time analysis of non-strict functional programs with lazy lists. One limitation of this basic calculus is that the ordinary equational reasoning on functional programs is not valid. In order to buy back some of these equational properties we develop a nonstandard operational equivalence relation called cost equivalence, by considering the number of computation steps as an `observable' component of the evaluation process. We define this relation by analogy with Park's definition of bisimulation in CCS. This formulation allows us to show that cost equivalence is a contextual congruence (and thus is substitutive with respect to the basic calculus) and provides useful proof techniques for establishing cost-equivalen...
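Treating step counts as observable, as the abstract proposes, can be mimicked by instrumenting evaluation with a tick counter. A toy Python sketch using a generator for the lazy list (the paper works in a non-strict language; names here are illustrative):

```python
# A toy cost calculus: evaluation steps become an observable
# component of the result, in the spirit of cost equivalence.

steps = 0

def tick():
    global steps
    steps += 1

def nats(n=0):
    # lazy infinite list of naturals, modelled as a generator
    while True:
        tick()       # one "step" charged per element produced
        yield n
        n += 1

def take(k, gen):
    return [next(gen) for _ in range(k)]

steps = 0
assert take(3, nats()) == [0, 1, 2]
assert steps == 3    # laziness: only 3 steps, despite the infinite list
```

Two expressions are cost-equivalent in this sense when they agree both on values and on such step counts in all contexts.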
Automatic Accurate Time-Bound Analysis for High-Level Languages
 In Proceedings of the ACM SIGPLAN 1998 Workshop on Languages, Compilers, and Tools for Embedded Systems, volume 1474 of Lecture Notes in Computer Science
, 1998
Abstract

Cited by 38 (9 self)
This paper describes a general approach for automatic and accurate time-bound analysis. The approach consists of transformations for building time-bound functions in the presence of partially known input structures, symbolic evaluation of the time-bound function based on input parameters, optimizations to make the overall analysis efficient as well as accurate, and measurements of primitive parameters, all at the source-language level. We have implemented this approach and performed a number of experiments for analyzing Scheme programs. The measured worst-case times are closely bounded by the calculated bounds.

1 Introduction

Analysis of program running time is important for real-time systems, interactive environments, compiler optimizations, performance evaluation, and many other computer applications. It has been extensively studied in many fields of computer science: algorithms [20, 12, 13, 41], programming languages [38, 21, 30, 33], and systems [35, 28, 32, 31]. It is particularl...
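The relationship between a calculated time-bound function and a measured cost can be sketched concretely. Below, an append-based quadratic list reverse is charged one primitive step per element copied, and the bound is a function of the input-size parameter n (the cost model and names are illustrative, not the paper's):

```python
# A time-bound function for append-based reverse, expressed in the
# input-size parameter n, checked against a measured operation count.

ops = 0                       # primitive-operation counter

def append(xs, ys):
    global ops
    ops += len(xs)            # one primitive step per element copied
    return xs + ys

def naive_reverse(xs):
    if not xs:
        return []
    return append(naive_reverse(xs[1:]), [xs[0]])

def time_bound(n):
    # calculated bound: 0 + 1 + ... + (n-1) element copies
    return n * (n - 1) // 2

assert naive_reverse([1, 2, 3, 4, 5]) == [5, 4, 3, 2, 1]
assert ops <= time_bound(5)   # measured cost closely bounded (here: equal)
```

The paper's analysis constructs such bound functions automatically, including for inputs whose structure is only partially known.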
A sized time system for a parallel functional language
 In Proc. Implementation of Functional Languages (IFL ’02)
, 2003
Abstract

Cited by 24 (14 self)
This paper describes an inference system whose purpose is to determine the cost of evaluating expressions in a strict, purely functional language. Upper bounds can be derived for both computation cost and the size of data structures. We outline a static analysis based on this inference system for inferring size and cost information. The analysis is a synthesis of the sized types of Hughes et al. and the polymorphic time system of Dornic et al., which was extended to static dependent costs by Reistad and Gifford. Our main interest in cost information is for scheduling tasks in the parallel execution of functional languages. Using the GranSim parallel simulator, we show that the information provided by our analysis is sufficient to characterise relative task granularities for a simple functional program. This information can be used in the run-time system of the Glasgow Parallel Haskell compiler to improve dynamic program performance.
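The flavour of sized-type bookkeeping, where values carry size bounds and costs are derived from them, can be sketched as follows (a hypothetical encoding, not the paper's inference rules):

```python
# Sized-type style bookkeeping: each value carries an upper bound on
# its size, and a cost bound is computed from that size bound.

class Sized:
    def __init__(self, value, size_bound):
        self.value = value
        self.size_bound = size_bound

def sized_map(f, sx, cost_per_elem=1):
    # map preserves the size bound; its cost bound is
    # size bound * per-element cost
    result = Sized([f(x) for x in sx.value], sx.size_bound)
    cost_bound = sx.size_bound * cost_per_elem
    return result, cost_bound

xs = Sized([1, 2, 3], size_bound=3)
ys, cost = sized_map(lambda x: x * x, xs)
assert ys.value == [1, 4, 9]
assert cost == 3   # static bound: 3 elements, unit cost each
```

In the paper such size and cost information is inferred statically and then consulted to decide which tasks are worth running in parallel.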
Automatic time-bound analysis for a higher-order language
 In Proceedings of the ACM SIGPLAN 2002 Workshop on Partial Evaluation and Semantics-Based Program Manipulation
, 2002
Abstract

Cited by 24 (5 self)
Analysis of program running time is important for reactive systems, interactive environments, compiler optimizations, performance evaluation, and many other computer applications. It has been extensively studied in many fields of computer science: algorithms [21, 12, 13, 40], programming languages [38, 22, 31, 35, 34], and systems [36, 29, 33, 32]. Being able to predict accurate time bounds automatically and efficiently ...
Operational Theories of Improvement in Functional Languages (Extended Abstract)
 In Proceedings of the Fourth Glasgow Workshop on Functional Programming
, 1991
Abstract

Cited by 21 (9 self)
David Sands, Department of Computing, Imperial College, 180 Queens Gate, London SW7 2BZ. Email: ds@uk.ac.ic.doc

In this paper we address the technical foundations essential to the aim of providing a semantic basis for the formal treatment of relative efficiency in functional languages. For a general class of "functional" computation systems, we define a family of improvement preorderings which express, in a variety of ways, when one expression is more efficient than another. The main results of this paper build on Howe's study of equality in lazy computation systems, and are concerned with the question of when a given improvement relation is subject to the usual forms of (in)equational reasoning (so that, for example, we can improve an expression by improving any subexpression). For a general class of computation systems we establish conditions on the operators of the language which guarantee that an improvement relation is a precongruence. In addition, for...
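The notion of one expression improving another, being at least as efficient under a step-counting cost model, can be illustrated by comparing two reverse functions (toy step counters under an assumed cost model, not the paper's formal preorder):

```python
# An "improvement" in the step-counting sense: the accumulator-based
# reverse never takes more steps than the append-based one.

def naive_reverse(xs):
    # reverse (x:rest) = reverse rest ++ [x];
    # charge len(left) + 1 steps per append
    if not xs:
        return [], 0
    rest, steps = naive_reverse(xs[1:])
    return rest + [xs[0]], steps + len(rest) + 1

def acc_reverse(xs, acc=None):
    # tail-recursive accumulator version: one step per element
    if acc is None:
        acc = []
    if not xs:
        return acc, 0
    out, steps = acc_reverse(xs[1:], [xs[0]] + acc)
    return out, steps + 1

xs = list(range(10))
r1, s1 = naive_reverse(xs)   # quadratic step count
r2, s2 = acc_reverse(xs)     # linear step count
assert r1 == r2 == list(reversed(xs))
assert s2 <= s1              # acc_reverse improves naive_reverse here
```

The paper's contribution is showing when such an improvement relation is a precongruence, so that improving any subexpression improves the whole program.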
Optimized Live Heap Bound Analysis
 In VMCAI 03, volume 2575 of LNCS
, 2001
Abstract

Cited by 15 (2 self)
This paper describes a general approach for optimized live heap space and live heap space-bound analyses for garbage-collected languages.
Automatic Accurate Cost-Bound Analysis for High-Level Languages
, 2001
Abstract

Cited by 15 (4 self)
This paper describes a language-based approach for automatic and accurate cost-bound analysis. The approach consists of transformations for building cost-bound functions in the presence of partially known input structures, symbolic evaluation of the cost-bound function based on input size parameters, and optimizations to make the overall analysis efficient as well as accurate, all at the source-language level. The calculated cost bounds are expressed in terms of primitive cost parameters. These parameters can be obtained based on the language implementation or be measured conservatively or approximately, yielding accurate, conservative, or approximate time or space bounds. We have implemented this approach and performed a number of experiments for analyzing Scheme programs. The results helped confirm the accuracy of the analysis.
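A cost bound "expressed in terms of primitive cost parameters" stays symbolic in per-operation costs and is only instantiated once those costs are measured. A hypothetical sketch (parameter names and values are illustrative):

```python
# A symbolic cost bound for a length-n list traversal, parameterized
# by primitive costs, instantiated with "measured" values afterwards.

def traversal_cost_bound(n, c_call, c_test, c_cons):
    # per element: one call, one emptiness test, one cons;
    # plus the final emptiness test on the empty list
    return n * (c_call + c_test + c_cons) + c_test

# Measured primitive costs (hypothetical units, e.g. nanoseconds):
measured = dict(c_call=5, c_test=1, c_cons=3)
assert traversal_cost_bound(100, **measured) == 100 * 9 + 1
```

Plugging in conservative versus approximate measurements of the primitive parameters then yields conservative versus approximate overall bounds, as the abstract notes.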
Using the Run-Time Sizes of Data Structures to Guide Parallel-Thread Creation
 In Proceedings of the ACM Conference on LISP and Functional Programming
, 1994
Abstract

Cited by 14 (2 self)
Dynamic granularity estimation is a new technique for automatically identifying expressions in functional languages for parallel evaluation. Expressions with little computation relative to thread-creation costs should evaluate sequentially for maximum performance. Static identification of such threads is however difficult. Therefore, dynamic granularity estimation has compile-time and run-time components: abstract interpretation statically identifies functions whose complexity depends on data structure sizes; the run-time system maintains approximations to these sizes. Compiler-inserted checks consult this size information to make thread creation decisions dynamically. We describe dynamic granularity estimation for a list-based functional language. Extension to general recursive data structures and imperative operations is possible. Performance measurements of dynamic granularity estimation in a parallel ML implementation on a shared-memory machine demonstrate the possibility of large...
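The compiler-inserted size check the abstract describes amounts to: spawn a thread only when the run-time size estimate says the work outweighs thread-creation cost. A miniature Python sketch (the threshold, executor, and function names are illustrative stand-ins, not the paper's implementation):

```python
# Dynamic granularity in miniature: a size check decides at run time
# whether to pay thread-creation cost or evaluate sequentially.

from concurrent.futures import ThreadPoolExecutor

GRAIN_THRESHOLD = 1000  # below this, spawning costs more than it saves

def maybe_parallel_sum(xs, pool):
    if len(xs) < GRAIN_THRESHOLD:   # the dynamic granularity check
        return sum(xs)              # too small: evaluate sequentially
    mid = len(xs) // 2
    left = pool.submit(maybe_parallel_sum, xs[:mid], pool)  # new thread
    right = maybe_parallel_sum(xs[mid:], pool)
    return left.result() + right

with ThreadPoolExecutor(max_workers=4) as pool:
    total = maybe_parallel_sum(list(range(2000)), pool)
assert total == sum(range(2000))
```

In the paper the size approximations are maintained by the run-time system for real data structures, rather than read off directly with `len` as here.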