Results 11-20 of 34
Space Profiling for Parallel Functional Programs
Cited by 15 (3 self)
This paper presents a semantic space profiler for parallel functional programs. Building on previous work in sequential profiling, our tools help programmers to relate runtime resource use back to program source code. Unlike many profiling tools, our profiler is based on a cost semantics. This provides a means to reason about performance without requiring a detailed understanding of the compiler or runtime system. It also provides a specification for language implementers. This is critical in that it enables us to separate cleanly the performance of the application from that of the language implementation. Some aspects of the implementation can have significant effects on performance. Our cost semantics enables programmers to understand the impact of different scheduling policies yet abstracts away from many of the details of their implementations. We show applications where the choice of scheduling policy has asymptotic effects on space use. We explain these use patterns through a demonstration of our tools. We also validate our methodology by observing similar performance in our implementation of a parallel extension of Standard ML.
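The claim that scheduling policy can have asymptotic effects on space use can be illustrated with a small simulation (a hypothetical Python sketch, not the paper's tools or cost semantics): for a binary fork tree of parallel tasks, a LIFO (depth-first) scheduler keeps the set of pending tasks linear in the depth, while a FIFO (breadth-first) scheduler can hold an entire level at once, exponential in the depth.

```python
from collections import deque

def max_live_tasks(depth, policy):
    """Simulate scheduling a binary fork tree of the given depth.

    Each task at depth < `depth` forks two children. A LIFO (depth-first)
    policy keeps the pending-task set small; a FIFO (breadth-first)
    policy can hold a whole level at once, exponential in `depth`.
    """
    pending = deque([0])          # each task is represented by its depth
    peak = 1
    while pending:
        d = pending.pop() if policy == "lifo" else pending.popleft()
        if d < depth:
            pending.append(d + 1)
            pending.append(d + 1)
        peak = max(peak, len(pending))
    return peak

print(max_live_tasks(10, "lifo"))   # grows linearly with depth
print(max_live_tasks(10, "fifo"))   # grows exponentially with depth
```

The pending-task count stands in for live space; the same program under two schedulers shows the asymptotic gap the abstract refers to.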
Quantitative Observables and Averages in Probabilistic Constraint Programming
New Trends in Constraints, Lecture Notes in Computer Science 1865, 1999
Cited by 11 (6 self)
We investigate notions of observable behaviour of programs which include quantitative aspects of computation along with the most commonly assumed qualitative ones. We model these notions by means of a transition system where transitions occur with a given probability and an associated `cost' expressing some complexity measure (e.g. running time or, in general, resource consumption).
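A minimal sketch of such a probabilistic transition system (hypothetical Python with illustrative states and costs, not the paper's constraint-programming formalism): each transition carries a probability and a cost, and a quantitative observable such as expected total cost weights each branch's cost by its probability.

```python
# Hypothetical transition system: state -> [(probability, cost, next_state)].
transitions = {
    "s0": [(0.5, 2, "s1"), (0.5, 3, "s2")],
    "s1": [(1.0, 1, "halt")],
    "s2": [(1.0, 4, "halt")],
}

def expected_cost(state):
    """Expected total cost of running from `state` to the terminal `halt`."""
    if state == "halt":
        return 0.0
    return sum(p * (c + expected_cost(nxt))
               for p, c, nxt in transitions[state])

print(expected_cost("s0"))  # 0.5*(2+1) + 0.5*(3+4) = 5.0
```

Averaging over branches in this way is the kind of quantitative observable the abstract describes, alongside the usual qualitative ones.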
Improvement Theory and its Applications
Higher Order Operational Techniques in Semantics, Publications of the Newton Institute, 1997
Cited by 11 (4 self)
An improvement theory is a variant of the standard theories of observational approximation (or equivalence) in which the basic observations made of a functional program's execution include some intensional information about, for example, the program's computational cost. One program is an improvement of another if its execution is more efficient in any program context. In this article we give an overview of our work on the theory and applications of improvement. Applications include reasoning about time properties of functional programs, and proving the correctness of program transformation methods. We also introduce a new application, in the form of some bisimulation-like proof techniques for equivalence, with something of the flavour of Sangiorgi's "bisimulation up to expansion and context".
Implicit Self-Adjusting Computation for Purely Functional Programs
Cited by 10 (4 self)
Computational problems that involve dynamic data, such as physics simulations and program development environments, have been an important subject of study in programming languages. Building on this work, recent advances in self-adjusting computation have developed techniques that enable programs to respond automatically and efficiently to dynamic changes in their inputs. Self-adjusting programs have been shown to be efficient for a reasonably broad range of problems but the approach still requires an explicit programming style, where the programmer must use specific monadic types and primitives to identify, create and operate on data that can change over time. We describe techniques for automatically translating purely functional programs into self-adjusting programs. In this implicit approach, the programmer need only annotate the (top-level) input types of the programs to be translated. Type inference finds all other types, and a type-directed translation rewrites the source program into an explicitly self-adjusting target program. The type system is related to information-flow type systems and enjoys decidable type inference via constraint solving. We prove that the translation outputs well-typed self-adjusting programs and preserves the source program's input-output behavior, guaranteeing that translated programs respond correctly to all changes to their data. Using a cost semantics, we also prove that the translation preserves the asymptotic complexity of the source program.
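The explicit style that the translation targets can be sketched with a toy modifiable reference (a hypothetical Python illustration; the paper's formalism uses typed monadic primitives in an ML-like language): a write to an input triggers change propagation that reruns exactly the readers depending on it.

```python
class Mod:
    """A modifiable reference: holds a value plus the readers to rerun."""
    def __init__(self, value):
        self.value = value
        self.readers = []          # thunks to rerun when the value changes

    def read(self, reader):
        self.readers.append(reader)
        reader(self.value)         # run once against the current value

    def write(self, value):        # change the input and propagate
        self.value = value
        for reader in self.readers:
            reader(value)

# A tiny self-adjusting computation: `result` tracks the sum of `xs`.
xs = Mod([1, 2, 3])
result = {}
xs.read(lambda v: result.update(total=sum(v)))
print(result["total"])   # 6
xs.write([1, 2, 3, 4])   # input changes; propagation reruns the reader
print(result["total"])   # 10
```

The implicit approach of the paper removes the need to write `Mod`, `read`, and `write` by hand; the translation inserts them from type annotations on the inputs.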
A consistent semantics of self-adjusting computation
2006
Cited by 9 (8 self)
This paper presents a semantics of self-adjusting computation and proves that the semantics is correct and consistent. The semantics integrates change propagation with the classic idea of memoization to enable reuse of computations under mutation to memory. During evaluation, reuse of a computation via memoization triggers a change propagation that adjusts the reused computation to reflect the mutated memory. Since the semantics combines memoization and change propagation, it involves both nondeterminism and mutation. Our consistency theorem states that the nondeterminism is not harmful: any two evaluations of the same program starting at the same state yield the same result. Our correctness theorem states that mutation is not harmful: self-adjusting programs are consistent with purely functional programming. We formalized the semantics and its metatheory in the LF logical framework and machine-checked the proofs in Twelf.
Tuning Task Granularity and Data Locality of Data Parallel GpH Programs
2001
Cited by 9 (5 self)
The performance of data parallel programs often hinges on two key coordination aspects: the computational costs of the parallel tasks relative to their management overhead (task granularity), and the communication costs induced by the distance between tasks and their data (data locality). In data parallel programs both granularity and locality can be improved by clustering, i.e. arranging for parallel tasks to operate on related sub-collections of data.
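Clustering in this sense can be sketched as a chunked parallel map (a hypothetical Python illustration, not GpH): instead of one task per element, each of a small number of tasks processes a contiguous sub-collection, improving granularity (fewer, larger tasks amortize management overhead) and locality (each task touches adjacent data).

```python
from concurrent.futures import ThreadPoolExecutor

def clustered_map(f, data, clusters):
    """Apply f over `data` using `clusters` tasks, one per contiguous chunk."""
    k = max(1, len(data) // clusters)
    chunks = [data[i:i + k] for i in range(0, len(data), k)]
    with ThreadPoolExecutor(max_workers=clusters) as pool:
        parts = pool.map(lambda chunk: [f(x) for x in chunk], chunks)
    return [y for part in parts for y in part]

print(clustered_map(lambda x: x * x, list(range(8)), clusters=2))
# [0, 1, 4, 9, 16, 25, 36, 49]
```

Tuning `clusters` is the knob the abstract describes: too many tasks and overhead dominates; too few and parallelism is lost.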
A Parallel Complexity Model for Functional Languages
In: Proc. ACM Conf. on Functional Programming Languages and Computer Architecture, 1994
Cited by 5 (2 self)
A complexity model based on the λ-calculus with an appropriate operational semantics is presented and related to various parallel machine models, including the PRAM and hypercube models. The model is used to study parallel algorithms in the context of "sequential" functional languages, and to relate these results to algorithms designed directly for parallel machine models. For example, the paper shows that equally good upper bounds can be achieved for merging two sorted sequences in the pure λ-calculus with some arithmetic constants as in the EREW PRAM, when they are both mapped onto a more realistic machine such as a hypercube or butterfly network. In particular for n keys and p processors, they both result in an O(n/p + log² p) time algorithm. These results argue that it is possible to get good parallelism in functional languages without adding explicitly parallel constructs. In fact, the lack of random access seems to be a bigger problem than the lack of parallelism. This research...
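The stated bound can be made concrete (a hypothetical Python sketch with all constants set to 1): the n/p work term falls as processors are added, until the log² p coordination term dominates.

```python
import math

def merge_time_bound(n, p):
    """Illustrative O(n/p + log^2 p) step bound for merging two sorted
    sequences of total length n with p processors (constants set to 1)."""
    return n / p + math.log2(p) ** 2

# Doubling processors helps until the log^2 p term takes over:
for p in (1, 4, 16, 64):
    print(p, merge_time_bound(2 ** 20, p))
```

The same shape of bound holds for both the λ-calculus formulation and the EREW PRAM algorithm when mapped onto a hypercube or butterfly network, which is the abstract's point.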
Termination Analysis based on Operational Semantics
1995
Cited by 4 (1 self)
In principle termination analysis is easy: find a well-founded partial order and prove that calls decrease with respect to this order. In practice this often requires an oracle (or a theorem prover) for determining the well-founded order and this oracle may not be easily implementable. Our approach circumvents some of these problems by exploiting the inductive definition of algebraic data types and using pattern matching as in functional languages. We develop a termination analysis for a higher-order functional language; the analysis incorporates and extends polymorphic type inference and axiomatizes a class of well-founded partial orders for multiple-argument functions (as in Standard ML and Miranda). Semantics is given by means of operational (natural-style) semantics and soundness is proved; this involves making extensions to the semantic universe and we relate this to the techniques of denotational semantics. For dealing with the partiality aspects of the soundness proof it suffice...
Projection-based Program Analysis
1994
Cited by 3 (0 self)
Projection-based program analysis techniques are remarkable for their ability to give highly detailed and useful information not obtainable by other methods. The first proposed projection-based analysis techniques were those of Wadler and Hughes for strictness analysis, and Launchbury for binding-time analysis; both techniques are restricted to analysis of first-order monomorphic languages. Hughes and Launchbury generalised the strictness analysis technique, and Launchbury the binding-time analysis technique, to handle polymorphic languages, again restricted to first order. Other than a general approach to higher-order analysis suggested by Hughes, and an ad hoc implementation of higher-order binding-time analysis by Mogensen, neither of which had any formal notion of correctness, there has been no successful generalisation to higher-order analysis. We present a complete redevelopment of monomorphic projection-based program analysis from first principles, starting by considering the analysis of functions (rather than programs) to establish bounds on the intrinsic power of projection-based analysis, showing also that projection-based analysis can capture interesting termination ...
Adventures in time and space
33rd ACM Symposium on Principles of Programming Languages, 2006
Cited by 3 (3 self)
This paper investigates what is essentially a call-by-value version of PCF under a complexity-theoretically motivated type system. The programming formalism, ATR, has its first-order programs characterize the polynomial-time computable functions, and its second-order programs characterize the type-2 basic feasible functionals of Mehlhorn and of Cook and Urquhart. (The ATR types are confined to levels 0, 1, and 2.) The type system comes in two parts, one that primarily restricts the sizes of values of expressions and a second that primarily restricts the time required to evaluate expressions. The size-restricted part is motivated by Bellantoni and Cook's and Leivant's implicit characterizations of polynomial time. The time-restricting part is an affine version of Barber and Plotkin's DILL. Two semantics are constructed for ATR. The first is a pruning of the naïve denotational semantics for ATR. This pruning removes certain functions that cause otherwise feasible forms of recursion to go wrong. The second semantics is a model for ATR's time complexity relative to a certain abstract machine. This model provides a setting for complexity recurrences arising from ATR recursions, the solutions of which yield second-order polynomial time bounds. The time-complexity semantics is also shown to be sound relative to the costs of interpretation on the abstract machine.