Results 1 – 5 of 5
Benchmarking Purely Functional Data Structures
 Journal of Functional Programming
, 1999
Abstract

Cited by 2 (0 self)
When someone designs a new data structure, they want to know how well it performs. Previously, the only way to find out was to find, code, and test some applications to act as benchmarks. This can be tedious and time-consuming. Worse, how a benchmark uses a data structure may considerably affect the efficiency of that data structure, so the choice of benchmarks may bias the results. For these reasons, new data structures developed for functional languages often pay little attention to empirical performance. We solve these problems by developing a benchmarking tool, Auburn, that can generate benchmarks across a fair distribution of uses. We precisely define "the use of a data structure", upon which we build the core algorithms of Auburn: how to generate a benchmark from a description of use, and how to extract a description of use from an application. We consider how best to use these algorithms to benchmark competing data structures. Finally, we test Auburn by benchmarking ...
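The second of Auburn's core algorithms, extracting a description of use from an application, can be illustrated with a toy sketch (this is not Auburn's actual algorithm; all names and the operation set are illustrative): record the operations an application performs on a datatype, then summarise them as a usage profile.

```haskell
-- Toy sketch of "extracting a description of use from an application"
-- (illustrative only, not Auburn itself).
import Data.List (group, sort)

data Op = Cons | Head | Tail | Append deriving (Show, Eq, Ord)

-- A usage profile: for each operation, its share of all operations.
type Profile = [(Op, Double)]

profile :: [Op] -> Profile
profile ops =
  [ (head g, fromIntegral (length g) / total) | g <- group (sort ops) ]
  where total = fromIntegral (length ops)

-- Hypothetical trace of an application that mostly conses and
-- occasionally inspects the head.
exampleTrace :: [Op]
exampleTrace = replicate 6 Cons ++ replicate 2 Head ++ replicate 2 Tail

main :: IO ()
main = print (profile exampleTrace)  -- [(Cons,0.6),(Head,0.2),(Tail,0.2)]
```

Such a profile is one possible "description of use"; Auburn's contribution is making that description precise enough to regenerate a benchmark from it.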
Efficient Data Structures in a Lazy Functional Language
, 2003
Abstract
Although a lot of theoretical work has been done on purely functional data structures, few of them have actually been implemented in a generally useful form, let alone as part of a data structure library providing a uniform framework.
In 1998, Chris Okasaki started to change this by implementing Edison, a library of efficient data structures for Haskell.
Unfortunately, he abandoned his work after creating a framework and writing some data structure implementations for parts of it.
This document first gives an overview of the current state of Edison and describes what efficiency in a lazy language means and how to measure it in a way that trades off complexity and precision to produce meaningful results.
These techniques are then applied to give an analysis of the sequence implementations present in Edison. Okasaki only briefly mentions the main characteristics of the data structures he has implemented, but to allow the user to choose the most efficient one for a given task, a more complete analysis seems needed.
To round off Edison's sequence part, four new implementations based on previously known theoretical work are presented and analysed: two deques based on the pair-of-lists approach, and two data structures that allow constant-time appending while preserving constant time for tail and, for one of them, even init.
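The pair-of-lists approach mentioned here can be sketched minimally (this is not Edison's implementation; names and the rebalancing split are illustrative): the front list holds the head of the sequence, the rear list holds the tail reversed, so both ends support O(1) insertion, and removal is O(1) amortised because an empty side is refilled by splitting and reversing half of the other.

```haskell
-- Minimal pair-of-lists deque sketch (illustrative, not Edison's code).
data Deque a = Deque [a] [a]

empty :: Deque a
empty = Deque [] []

-- O(1) insertion at either end.
pushFront, pushRear :: a -> Deque a -> Deque a
pushFront x (Deque f r) = Deque (x : f) r
pushRear  x (Deque f r) = Deque f (x : r)

-- O(1) amortised removal: when the front runs dry, split the rear
-- and reverse half of it across to the front.
popFront :: Deque a -> Maybe (a, Deque a)
popFront (Deque [] [])     = Nothing
popFront (Deque [] r)      = let (r', f') = splitAt (length r `div` 2) r
                             in popFront (Deque (reverse f') r')
popFront (Deque (x : f) r) = Just (x, Deque f r)

toList :: Deque a -> [a]
toList (Deque f r) = f ++ reverse r

main :: IO ()
main = do
  let d = pushRear 3 (pushRear 2 (pushFront 1 empty))
  print (toList d)               -- [1,2,3]
  print (fmap fst (popFront d))  -- Just 1
```

Splitting the rear in half (rather than moving it wholesale) is what keeps both ends cheap in the deque case.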
To achieve a certain confidence in the correctness of the new implementations, we also present QuickCheck properties that not only check that the operations behave as required by the abstraction, but also allow data-structure-specific invariants to be tested, while remaining polymorphic.
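The invariant-style properties described here can be sketched as follows. Real QuickCheck generates random test cases; to stay dependency-free, this illustrative sketch (not Edison's actual properties — the queue, its invariant, and all names are hypothetical) checks a property exhaustively over small operation sequences instead.

```haskell
-- Illustrative sketch of invariant-checking properties in the spirit
-- of QuickCheck, using exhaustive small-case checking instead of
-- random generation (all names hypothetical).

-- A simple front/rear queue whose invariant is: the front list is
-- empty only when the whole queue is empty.
data Queue a = Queue [a] [a] deriving Show

emptyQ :: Queue a
emptyQ = Queue [] []

-- Smart constructor restoring the invariant.
mkQ :: [a] -> [a] -> Queue a
mkQ [] r = Queue (reverse r) []
mkQ f  r = Queue f r

snocQ :: Queue a -> a -> Queue a
snocQ (Queue f r) x = mkQ f (x : r)

tailQ :: Queue a -> Queue a
tailQ (Queue (_ : f) r) = mkQ f r
tailQ q                 = q

-- The structure-specific invariant.
invariant :: Queue a -> Bool
invariant (Queue [] r) = null r
invariant _            = True

-- Abstraction function: a queue denotes a list.
absQ :: Queue a -> [a]
absQ (Queue f r) = f ++ reverse r

data Op = Snoc Int | Tail deriving Show

applyOp :: Queue Int -> Op -> Queue Int
applyOp q (Snoc x) = snocQ q x
applyOp q Tail     = tailQ q

-- "Property": after any sequence of operations, the invariant holds.
prop_invariant :: [Op] -> Bool
prop_invariant ops = invariant (foldl applyOp emptyQ ops)

allOpSeqs :: Int -> [[Op]]
allOpSeqs 0 = [[]]
allOpSeqs n = [ op : ops | op <- [Snoc n, Tail], ops <- allOpSeqs (n - 1) ]

main :: IO ()
main = print (all prop_invariant (allOpSeqs 6))  -- True
```

With QuickCheck one would instead write an `Arbitrary` instance for `Op` and let the library generate the operation sequences; the property itself stays the same.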
Exploring Datatype Usage Space
, 1998
Abstract
Quantifying the use of a data structure makes benchmarking data structures easier and more reliable. We explore different ways of quantifying datatype usage. We present a basic solution and examine three extensions to this solution.
1 Motivation
Suppose we have a selection of data structures performing similar tasks. How do we compare their efficiency? Traditionally we might choose a few benchmarks, and run each benchmark with each data structure. This has two drawbacks. Firstly, it may be hard to find or create appropriate benchmarks. Secondly, the efficiency of a data structure may depend heavily on how it is used, though it may be unclear how a benchmark uses the data structure. However, if we could quantify use accurately, and create a benchmark for any given use, we could avoid both problems as follows. Imagine the space of possible uses of a data structure. Map out this space using a well-chosen set of coordinates. Choose points at regular intervals in this space. For each poi...
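The idea of treating a use as a coordinate, and generating a benchmark from it, can be sketched in a toy form (this is not Auburn; the single `cons`/`tail` ratio coordinate, the LCG stand-in for a random source, and all names are illustrative):

```haskell
-- Toy sketch: describe a use of a datatype as a coordinate (here just
-- the ratio of cons to tail operations) and generate a benchmark
-- operation sequence from that description. Illustrative only.

data Op = Cons | Tail deriving (Show, Eq)

-- Linear congruential generator (constants from Numerical Recipes),
-- standing in for a real random source to stay dependency-free.
lcg :: Int -> Int
lcg s = (1664525 * s + 1013904223) `mod` 2147483648

-- Generate n operations; consRatio (0..1) is the coordinate saying
-- how cons-heavy the usage is.
genOps :: Double -> Int -> Int -> [Op]
genOps consRatio seed n = take n (go seed)
  where
    go s = let s' = lcg s
               x  = fromIntegral s' / 2147483648 :: Double
           in (if x < consRatio then Cons else Tail) : go s'

-- Run the generated benchmark against plain lists.
runOps :: [Op] -> [Int]
runOps = foldl step []
  where
    step xs Cons = 0 : xs
    step xs Tail = drop 1 xs

main :: IO ()
main = do
  print (genOps 0.7 42 10)
  print (length (runOps (genOps 0.7 42 1000)))
```

Sampling such coordinates at regular intervals, as the abstract suggests, then maps out the usage space: each sampled point yields one generated benchmark.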