Results 1-10 of 19
A cost calculus for parallel functional programming
Journal of Parallel and Distributed Computing, 1995
Cited by 58 (6 self)
Building a cost calculus for a parallel program development environment is difficult because of the many degrees of freedom available in parallel implementations, and because of difficulties with compositionality. We present a strategy for building cost calculi for skeleton-based programming languages which can be used for derivational software development and which deals in a pragmatic way with the difficulties of composition. The approach is illustrated for the Bird-Meertens theory of lists, a parallel functional language with an associated equational transformation system. Keywords: functional programming, parallel programming, program transformation, cost calculus, equational theories, architecture independence, Bird-Meertens formalism.
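To make the compositionality idea concrete, here is a toy sketch of a skeleton cost model in this spirit. The skeleton names and cost formulas (`map_cost`, `reduce_cost`, the ceil-division split) are illustrative assumptions, not the paper's calculus:

```python
import math

# Toy compositional cost model for two list skeletons (illustrative only).

def map_cost(elem_cost, n, p):
    """Parallel map: each of p processors handles ceil(n/p) elements."""
    return elem_cost * -(-n // p)   # ceil division

def reduce_cost(op_cost, n, p):
    """Parallel reduction: local fold, then a log-depth combining tree."""
    return op_cost * (-(-n // p) + math.ceil(math.log2(max(p, 1))))

def compose(*costs):
    """Sequential composition of skeletons: costs add."""
    return sum(costs)

# Cost of (reduce (+)) . (map f) on 10^6 elements and 8 processors,
# assuming unit element and operator costs:
total = compose(map_cost(1, 10**6, 8), reduce_cost(1, 10**6, 8))
```

The point of the sketch is only that a derivation step which rewrites a composition of skeletons can recompute its cost from the parts, which is the pragmatic compositionality the abstract refers to.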
Static dependent costs for estimating execution time
In Proc. of the 1994 ACM Conference on LISP and Functional Programming, 1994
Cited by 46 (0 self)
We present the first system for estimating and using data-dependent expression execution times in a language with first-class procedures and imperative constructs. The presence of first-class procedures and imperative constructs makes cost estimation a global problem that can benefit from type information. We estimate expression costs with the aid of an algebraic type reconstruction system that assigns every procedure a type that includes a static dependent cost. A static dependent cost describes the execution time of a procedure in terms of its inputs. In particular, a procedure's static dependent cost can depend on the size of input data structures and the cost of input first-class procedures. Our cost system produces symbolic cost expressions that contain free variables describing the size and cost of the procedure's inputs. At runtime, a cost estimate is dynamically computed from the statically determined cost expression and runtime cost and size information. We present experimental results that validate our cost system on three compilers and architectures. We experimentally demonstrate the utility of cost estimates in making dynamic parallelization decisions. In our experience, dynamic parallelization meets or exceeds the parallel performance of any fixed number of processors.
A sized time system for a parallel functional language
In Proc. Implementation of Functional Languages (IFL '02), 2003
Cited by 24 (14 self)
This paper describes an inference system whose purpose is to determine the cost of evaluating expressions in a strict, purely functional language. Upper bounds can be derived for both computation cost and the size of data structures. We outline a static analysis based on this inference system for inferring size and cost information. The analysis is a synthesis of the sized types of Hughes et al. and the polymorphic time system of Dornic et al., which was extended to static dependent costs by Reistad and Gifford. Our main interest in cost information is for scheduling tasks in the parallel execution of functional languages. Using the GranSim parallel simulator, we show that the information provided by our analysis is sufficient to characterise relative task granularities for a simple functional program. This information can be used in the runtime system of the Glasgow Parallel Haskell compiler to improve dynamic program performance.
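The flavor of such size-and-cost bounds can be caricatured as follows; the encoding (a dictionary of upper bounds per operation, unit body costs) is an assumption for illustration, not the paper's inference system:

```python
# Toy sized-type bookkeeping: each operation maps an input-size bound n
# to upper bounds on result size and evaluation cost (unit costs assumed).

def analyse_map(n):
    # map preserves the size bound; cost grows linearly with n
    return {"size": n, "cost": n}

def analyse_filter(n):
    # filter can only shrink the list, so n remains a sound size bound
    return {"size": n, "cost": n}

def analyse_compose(f, g, n):
    # (f . g): g's result-size bound feeds f; costs add
    rg = g(n)
    rf = f(rg["size"])
    return {"size": rf["size"], "cost": rg["cost"] + rf["cost"]}

# Bounds for map h . filter p on a list of at most 100 elements:
analyse_compose(analyse_map, analyse_filter, 100)
```

Note these are upper bounds: the filter's output may be far smaller than n, which is exactly the kind of imprecision a real sized-type system must tolerate to stay sound.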
Using the Run-Time Sizes of Data Structures to Guide Parallel-Thread Creation
In Proceedings of the ACM Conference on LISP and Functional Programming, 1994
Cited by 14 (2 self)
Dynamic granularity estimation is a new technique for automatically identifying expressions in functional languages for parallel evaluation. Expressions with little computation relative to thread-creation costs should evaluate sequentially for maximum performance. Static identification of such threads is, however, difficult. Therefore, dynamic granularity estimation has compile-time and runtime components: abstract interpretation statically identifies functions whose complexity depends on data structure sizes; the runtime system maintains approximations to these sizes. Compiler-inserted checks consult this size information to make thread creation decisions dynamically. We describe dynamic granularity estimation for a list-based functional language. Extension to general recursive data structures and imperative operations is possible. Performance measurements of dynamic granularity estimation in a parallel ML implementation on a shared-memory machine demonstrate the possibility of large...
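The runtime half of the scheme can be sketched as follows; the representation (a list carrying a cached size) and the `GRAIN` threshold are assumptions standing in for the paper's runtime-maintained size approximations and compiler-inserted checks:

```python
# Sketch: the runtime keeps an approximate size with each list, and a
# check consults it before deciding whether a parallel task is worthwhile.

GRAIN = 64  # assumed threshold below which spawning costs more than it saves

class SizedList:
    def __init__(self, items):
        self.items = list(items)
        self.size = len(self.items)   # size maintained by the runtime

def par_map(f, xs):
    if xs.size < GRAIN:
        # too little work: evaluate sequentially in the current thread
        return [f(x) for x in xs.items]
    # enough work: split and (conceptually) evaluate the halves in parallel
    mid = xs.size // 2
    left = par_map(f, SizedList(xs.items[:mid]))
    right = par_map(f, SizedList(xs.items[mid:]))
    return left + right
```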
The essence of monotonic state
2009
Cited by 14 (4 self)
We extend a static type-and-capability system with new mechanisms for expressing the promise that a certain abstract value evolves monotonically with time; for enforcing this promise; and for taking advantage of this promise to establish non-trivial properties of programs. These mechanisms are independent of the treatment of mutable state, but combine with it to offer a flexible account of "monotonic state". To demonstrate their use, we present a simple yet challenging example, namely monotonic integer counters. We then show how an implementation of thunks in terms of references can be assigned types that reflect time complexity properties, in the style of Danielsson (2008). This offers a foundational explanation of Danielsson's system and, at the same time, extends it to a calculus with mutable state. Last, we sketch an application to hash-consing.
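The monotonic-counter example can be mimicked operationally; here a runtime check stands in for the static capability discipline the abstract describes, and the class is purely illustrative:

```python
# Toy monotonic counter: because the value only ever grows, any lower
# bound witnessed in the past remains valid forever. The paper establishes
# this statically; here it is just a runtime observation.

class MonotonicCounter:
    def __init__(self):
        self._value = 0

    def increment(self):
        self._value += 1

    def lower_bound(self):
        # A witness of the current value is a permanent lower bound.
        return self._value

c = MonotonicCounter()
w = c.lower_bound()      # witness: value >= 0
c.increment()
assert c.lower_bound() >= w   # the old witness still holds
```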
Effect Systems with Subtyping
Cited by 9 (0 self)
Effect systems extend classical type systems with effect information. Just as types describe the possible values of expressions, effects describe their possible evaluation behaviors. Effects, which appear in function types, introduce new constraints on the typability of expressions. To increase the flexibility and accuracy of effect systems, we present a new effect system based on subtyping. The subtype relation is induced by a subsumption relation on effects. This subtyping effect system avoids merging effect information together, thus collecting more precise effect information. We introduce a reconstruction algorithm which, for any expression already typed with classical types, reconstructs its type and effect based on the subtype relation. The reconstruction algorithm is sound and complete w.r.t. the static semantics.
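A minimal sketch of effect subsumption, with assumed encodings (effects as sets of atomic behaviors, subsumption as set inclusion, atomic argument/result types): one function type is a subtype of another when its effect is subsumed by the other's, rather than the two effects being merged into a coarser approximation:

```python
# Toy effect subsumption: eff1 is subsumed by eff2 when every behavior
# of eff1 is allowed by eff2.

def subsumes(eff_small, eff_big):
    return eff_small <= eff_big

def fn_subtype(f, g):
    """f <: g for function types (arg, effect, result), atomic arg/result."""
    a1, e1, r1 = f
    a2, e2, r2 = g
    return a2 == a1 and subsumes(e1, e2) and r1 == r2

pure_inc = ("int", frozenset(), "int")
logging_inc = ("int", frozenset({"write"}), "int")

fn_subtype(pure_inc, logging_inc)   # a pure function may be used where
                                    # a writing one is expected, not vice versa
```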
Cost Analysis using Automatic Size and Time Inference
Implementation of Functional Languages, 14th International Workshop (IFL 2002), 2002
Cited by 7 (1 self)
Cost information can be exploited in a variety of contexts, including parallelizing compilers, autonomic GRIDs and real-time systems.
A Parallel Complexity Model for Functional Languages
In Proc. ACM Conf. on Functional Programming Languages and Computer Architecture, 1994
Cited by 5 (2 self)
A complexity model based on the λ-calculus with an appropriate operational semantics is presented and related to various parallel machine models, including the PRAM and hypercube models. The model is used to study parallel algorithms in the context of "sequential" functional languages, and to relate these results to algorithms designed directly for parallel machine models. For example, the paper shows that equally good upper bounds can be achieved for merging two sorted sequences in the pure λ-calculus with some arithmetic constants as in the EREW PRAM, when they are both mapped onto a more realistic machine such as a hypercube or butterfly network. In particular, for n keys and p processors, they both result in an O(n/p + log^2 p) time algorithm. These results argue that it is possible to get good parallelism in functional languages without adding explicitly parallel constructs. In fact, the lack of random access seems to be a bigger problem than the lack of parallelism. This research...
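The shape of the merging bound can be illustrated by naive bookkeeping; this is an assumed, simplified caricature of such bounds (linear total work divided across p processors, plus a log^2 p dependency-depth term), not the paper's semantics:

```python
import math

def merge_bounds(n, p):
    """Illustrative time bound for merging two sorted n-sequences on p
    processors, in the O(n/p + log^2 p) shape quoted in the text."""
    work = n                                        # linear total work
    depth = math.ceil(math.log2(max(p, 1))) ** 2    # log^2 p critical path
    return work // p + depth

merge_bounds(2**20, 16)
```

For large n the n/p term dominates, which is why the same asymptotic bound survives the mapping from the EREW PRAM onto hypercube or butterfly networks.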
Termination Analysis based on Operational Semantics
, 1995
Cited by 4 (1 self)
In principle termination analysis is easy: find a well-founded partial order and prove that calls decrease with respect to this order. In practice this often requires an oracle (or a theorem prover) for determining the well-founded order, and this oracle may not be easily implementable. Our approach circumvents some of these problems by exploiting the inductive definition of algebraic data types and using pattern matching as in functional languages. We develop a termination analysis for a higher-order functional language; the analysis incorporates and extends polymorphic type inference and axiomatizes a class of well-founded partial orders for multiple-argument functions (as in Standard ML and Miranda). Semantics is given by means of operational (natural-style) semantics and soundness is proved; this involves making extensions to the semantic universe and we relate this to the techniques of denotational semantics. For dealing with the partiality aspects of the soundness proof it suffice...
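The structural orders in question can be illustrated with a definition such an analysis would accept: every recursive call is on a strict sub-part of a constructor pattern, so the argument decreases in the order induced by the data type. The encoding below (Nil as None, Cons as a pair) is an assumption for illustration:

```python
# A list encoded inductively: Nil = None, Cons(head, tail) = (head, tail).
# The recursive call is on `tail`, a strict sub-term of the matched
# constructor, so the well-founded structural order guarantees termination.

def length(xs):
    if xs is None:               # Nil
        return 0
    head, tail = xs              # Cons(head, tail)
    return 1 + length(tail)      # strictly smaller argument

length((1, (2, None)))           # length of Cons(1, Cons(2, Nil))
```

A call like `length(build_bigger(xs))` would fall outside this class: the argument is no longer a sub-term, which is where the axiomatized orders (or an oracle) would be needed.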
Separate Polyvariant Binding Time Reconstruction
CRI Report A/261, Ecole des Mines, 1994
Cited by 2 (0 self)
Binding time analysis aims at determining which identifiers can be bound to their values at compile time. This binding time information is of utmost importance when performing partial evaluation or constant folding on programs. Existing binding time analyses are global in that they require complete program texts and descriptions of which of their inputs are available at compile time. As a consequence, such analyses cannot be used in programming languages that support modules or separate compilation. Libraries have to be analyzed every time they are used in some program. This is particularly limiting when considering programming in-the-large; any modification of an application results in the reprocessing of all the modules. This paper presents a new static analysis for higher-order typed functional languages that relies on a type and effect system to obtain polyvariant and separate binding time information. By allowing function types to be parametrized over the binding times of their arg...
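The core lattice such analyses compute over can be sketched in a few lines; the two-point encoding below is an assumption for illustration, and a polyvariant analysis would keep one binding-time description per use of a function instead of one merged description:

```python
# Two-point binding-time lattice: STATIC below DYNAMIC. An expression is
# static only when all of its parts are; any dynamic part taints it.

STATIC, DYNAMIC = 0, 1

def lub(*bts):
    """Least upper bound in the lattice: dynamic dominates."""
    return max(bts)

def app_bt(fun_bt, arg_bt):
    # A call is reducible at compile time only if both parts are static;
    # otherwise it must be residualized into the generated program.
    return lub(fun_bt, arg_bt)

app_bt(STATIC, STATIC)    # reducible at compile time
app_bt(STATIC, DYNAMIC)   # residualized
```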