## A sized time system for a parallel functional language (2003)

Venue: | Proc. Implementation of Functional Languages (IFL ’02) |

Citations: | 28 (16 self) |

### BibTeX

@INPROCEEDINGS{Loidl03asized,
  author    = {Hans-Wolfgang Loidl and Kevin Hammond},
  title     = {A sized time system for a parallel functional language},
  booktitle = {Proc. Implementation of Functional Languages (IFL '02)},
  year      = {2003},
  pages     = {8--10},
  publisher = {Springer-Verlag}
}

### Abstract

This paper describes an inference system whose purpose is to determine the cost of evaluating expressions in a strict, purely functional language. Upper bounds can be derived both for computation cost and for the size of data structures. We outline a static analysis based on this inference system for inferring size and cost information. The analysis is a synthesis of the sized types of Hughes et al. and the polymorphic time system of Dornic et al., which was extended to static dependent costs by Reistad and Gifford. Our main interest in cost information is for scheduling tasks in the parallel execution of functional languages. Using the GranSim parallel simulator, we show that the information provided by our analysis is sufficient to characterise relative task granularities for a simple functional program. This information can be used in the runtime system of the Glasgow Parallel Haskell compiler to improve dynamic program performance.
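As a rough, hypothetical sketch of the idea (the annotation syntax below is invented for illustration and is not the paper's concrete notation): a sized time system pairs each data type with an upper bound on its size and each function with an upper bound on its evaluation cost. For a list-deletion function `del`, of the kind discussed in the paper's worked example, such an analysis might record a bound like the one in the comment:

```haskell
-- Hypothetical sized-time annotation (illustrative syntax only):
--
--   del :: a -> List^n a -> List^n a   $ cost <= c * n
--
-- i.e. removing an element from a list of at most n elements yields a
-- list of at most n elements, at a cost linear in n. A plain Haskell
-- definition whose size and cost behaviour matches that bound:
del :: Eq a => a -> [a] -> [a]
del _ []     = []
del x (y:ys)
  | x == y    = del x ys        -- drop a matching element: size shrinks
  | otherwise = y : del x ys    -- keep the element: size is preserved

main :: IO ()
main = print (del 2 [1, 2, 3, 2])   -- prints [1,3]
```

The point of such bounds at runtime is that a task whose inferred cost is below some threshold need not be sparked as a separate parallel thread.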

### Citations

486 | The Omega Test: A fast practical integer programming algorithm for dependence analysis
- Pugh
- 1992

Citation Context: ...mputation system for solving recurrences (where this is possible) or by using a “database” of recurrences and their closed forms. 4. Solving the constraint set. For this step we use the Omega Library [12]. Inference and Simplification: In this example we perform simplification of the constraints on-the-fly. We only describe the main steps in the inference of the body of del: 1. Using the (App) rule twi...
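For context (an illustration of ours, not a constraint reproduced from the paper): the Omega Library decides formulas of Presburger arithmetic, i.e. quantified linear constraints over integer variables, so a constraint set arising from size and cost inference might take a shape such as

```latex
\exists k.\; n = k + 1 \;\wedge\; 1 \le k \;\wedge\; s \le n \;\wedge\; c \le 3n + 2
```

where \(n\) and \(s\) stand for size variables and \(c\) for a cost variable; the solver checks satisfiability and simplifies such sets.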

136 | Proving the correctness of reactive systems using sized types
- Hughes, Pareto, et al.
- 1996

Citation Context: ...wed that granularity information, if present, can be used to achieve a significant performance improvement. The system presented here is closely related to the sized types of Hughes, Pareto and Sabry [6] and to the time system developed by Dornic, Jouvelot and Gifford [2], which has been extended to a static dependent cost system by Reistad and Gifford [13]. However, unlike the latter system we can a...

93 | Mechanical program analysis
- Wegbreit
- 1975

Citation Context: ... kind are often encountered in the literature). We have no plans at present to implement such an automated scheme. 6 Related Work Pioneering work on automatic complexity analysis was done by Wegbreit [17]. His METRIC system can derive the average case complexity of a wide range of programs by solving the difference equations that occur as an intermediate step in the complexity analysis. However, this ...

84 | Automatic complexity analysis
- Rosendahl

55 | ACE: An Automatic Complexity Evaluator
- Metayer
- 1988

Citation Context: ...lexity analysis. However, this general approach is very expensive and therefore only possible in an off-line algorithm (and not in a static analysis that can be performed by a compiler). Le Métayer [8] takes a similar approach based on program transformation: he uses a set of rewrite rules to derive complexity functions, to simplify them, and finally to eliminate recursion. His ACE system works on FP p...

50 | Complexity analysis for a lazy higher-order language
- Sands
- 1990

Citation Context: ...an be done by using projections, modelling how much of a data structure is needed in a certain context [16]. The closest to a cost analysis for a lazy language is the cost calculus developed by Sands [15]. 7 Conclusions The sized time system introduced in this paper is the basis for performing a static cost analysis of expressions in a simple strict, polymorphic, higher-order language L. The basic str...

49 | Static dependent costs for estimating execution time
- Reistad, Gifford
- 1994

Citation Context: ... the sized types of Hughes, Pareto and Sabry [6] and to the time system developed by Dornic, Jouvelot and Gifford [2], which has been extended to a static dependent cost system by Reistad and Gifford [13]. However, unlike the latter system we can also derive cost information for some recursive functions. This is possible by extending the standard subtype inference mechanism with a “database” of known ...

37 | Strictness Analysis Aids Time Analysis
- Wadler
- 1988

Citation Context: ...anguage has to take the context of an expression into account to make it compositional [1]. This can be done by using projections, modelling how much of a data structure is needed in a certain context [16]. The closest to a cost analysis for a lazy language is the cost calculus developed by Sands [15]. 7 Conclusions The sized time system introduced in this paper is the basis for performing a static cos...

29 | High-performance parallel graph reduction
- Jones, Clack, et al.
- 1989

Citation Context: ...t granularity information. The runtime improvement for rather small latencies is due to the creation of many tiny tasks before the runtime system automatically discards these sparks as being worthless [11]. Therefore, there is more to gain by making the “right” decision when sparking. Unexpectedly, however, at medium latencies, priority scheduling yields worse performance than using no granularity info...

27 | A compositional approach to time analysis of first order lazy functional programs
- Bjerner, Holmström
- 1989

Citation Context: ...sed on a cost model for a lazy language. These heuristics work only in certain cases. A cost analysis for a lazy language has to take the context of an expression into account to make it compositional [1]. This can be done by using projections, modelling how much of a data structure is needed in a certain context [16]. The closest to a cost analysis for a lazy language is the cost calculus developed b...

24 | Visualising granularity in parallel programs: A graphical winnowing system for Haskell
- Hammond, Loidl, et al.
- 1995

Citation Context: ...at direction. We demonstrate the feasibility of our approach in Section 4 by giving a worked example. The output of the analysis (performed by hand) has been passed to our parallel simulator, GranSim [3]. For this example, there is a demonstrable performance improvement when using our previously developed granularity control mechanisms. This confirms previous results, where a hand-annotated program w...

23 | Polymorphic time systems for estimating program complexity
- Dornic, Jouvelot, et al.
- 1992

Citation Context: ...a significant performance improvement. The system presented here is closely related to the sized types of Hughes, Pareto and Sabry [6] and to the time system developed by Dornic, Jouvelot and Gifford [2], which has been extended to a static dependent cost system by Reistad and Gifford [13]. However, unlike the latter system we can also derive cost information for some recursive functions. This is pos...

16 | Using Run-Time List Sizes to Guide Parallel Thread Creation
- Huelsbergen, Larus, et al.
- 1994

Citation Context: ...defined an abstract interpretation (“dynamic granularity estimation”) of a higher-order, strict language for determining computation costs, which uses dynamic estimates of the size of data structures [5]. Their analysis uses the well-known trick whereby iteration in the abstract interpretation stops as soon as a certain bound for the computation costs of an expression is surpassed. This prevents non-termi...

14 | On the granularity of divide-and-conquer parallelism
- Loidl, Hammond
- 1995

Citation Context: ...le, there is a demonstrable performance improvement when using our previously developed granularity control mechanisms. This confirms previous results, where a hand-annotated program was measured. In [9] we showed that granularity information, if present, can be used to achieve a significant performance improvement. The system presented here is closely related to the sized types of Hughes, Pareto and...

9 | Distributed execution of functional programs using serial combinators
- Hudak, Goldberg
- 1985

Citation Context: ...to analyse the costs of user-defined recursive functions. Only a few authors have attempted to derive cost information from a lazy language in order to use it in a parallel system. Hudak and Goldberg [4] developed heuristics for improving the granularity of parallel threads based on a cost model for a lazy language. These heuristics work only in certain cases. A cost analysis for a lazy language has ...

2 | A Projection-Based Strictness Analyser for a Haskell Compiler
- Kubiak, Hughes, et al.
- 1992

Citation Context: ...parallel. The cost analysis could then use this strictness information in order to decide which expressions to evaluate in parallel. Therefore, a combination of a projection-based strictness analysis [7] with a size and cost analysis would be the most promising approach for integrating our sized time system into an implicitly parallel lazy functional language. In this paper we have focused on the fea...

1 | Finding Closed-Form Solutions of Difference Equations by Symbolic Methods
- Petkovsek
- 1990

Citation Context: ...form). In general, a cost function may involve arbitrarily complex recurrences, but with the current state of the art it is only possible to find closed forms for the following classes of recurrences [10]: linear recurrences with constant coefficients; homogeneous linear recurrences with polynomial coefficients; certain divide-and-conquer recurrences; certain non-linear first order recurrences...
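As a minimal worked instance of the first of these classes (a linear recurrence with constant coefficients; the example is ours, not the paper's), a cost function satisfying

```latex
T(0) = c_0, \qquad T(n) = T(n-1) + c \quad (n \ge 1)
\;\Longrightarrow\; T(n) = c_0 + c\,n
```

has its closed form obtained simply by unfolding the recurrence \(n\) times; pairs of this kind (recurrence, closed form) are what a “database” of known recurrences can store for lookup during inference.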