Results 11–20 of 26
A Second Year Course on Data Structures Based on Functional Programming
Nijmegen, The Netherlands, 1995
Abstract

Cited by 7 (1 self)
In this paper, we make a proposal for a second-year course on advanced programming, based on the functional paradigm. It assumes the existence of a first course on programming, also based on functional languages. Its main subject is data structures. We claim that advanced data structures and algorithms can be taught better in the functional paradigm than in the imperative one, and that this can be done without losing efficiency. We also claim that, as a consequence of the higher level of abstraction of functional languages, more subjects can be covered in the given amount of time. In the paper, numerous examples of unusual data structures and algorithms are presented, illustrating the contents and the philosophy of the proposed course.

1 Introduction

The controversy about the use of a functional language as the first programming language is still alive. Several proposals have been made on a first programming course based on the functional paradigm or on a mixture of the functional a...
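As a concrete taste of the kind of material such a course covers, here is a minimal sketch (ours, not from the paper) of a leftist min-heap in Haskell: an "advanced" data structure whose functional definition is only a handful of lines yet retains the O(log n) merge bound.

```haskell
-- Illustrative sketch, not code from the paper: a leftist min-heap.
-- The rank of a node is the length of its rightmost spine; keeping the
-- shorter spine on the right gives O(log n) merge.
data Heap a = Empty | Node Int a (Heap a) (Heap a)

rank :: Heap a -> Int
rank Empty          = 0
rank (Node r _ _ _) = r

-- Smart constructor: put the higher-rank child on the left.
node :: a -> Heap a -> Heap a -> Heap a
node x l r
  | rank l >= rank r = Node (rank r + 1) x l r
  | otherwise        = Node (rank l + 1) x r l

merge :: Ord a => Heap a -> Heap a -> Heap a
merge h Empty = h
merge Empty h = h
merge h1@(Node _ x l1 r1) h2@(Node _ y l2 r2)
  | x <= y    = node x l1 (merge r1 h2)
  | otherwise = node y l2 (merge h1 r2)

insert :: Ord a => a -> Heap a -> Heap a
insert x = merge (Node 1 x Empty Empty)

findMin :: Heap a -> Maybe a
findMin Empty          = Nothing
findMin (Node _ x _ _) = Just x

deleteMin :: Ord a => Heap a -> Heap a
deleteMin Empty          = Empty
deleteMin (Node _ _ l r) = merge l r
```

All operations except `merge` are one-liners on top of `merge`, which is the kind of economy the course proposal appeals to.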
A Probabilistic Approach to the Problem of Automatic Selection of Data Representations
In Proceedings of the 1996 ACM SIGPLAN International Conference on Functional Programming, 1996
Abstract

Cited by 6 (3 self)
The design and implementation of efficient aggregate data structures have been an important issue in functional programming. It is not clear how to select a good representation for an aggregate when access patterns to the aggregate are highly variant, or even unpredictable. Previous approaches rely on compile-time analyses or programmer annotations. These methods can be unreliable because they try to predict program behaviors before the program is executed. We propose a probabilistic approach, based on Markov processes, for the automatic selection of data representations. The selection is modeled as a random process moving in a graph with weighted edges. The proposed approach employs coin tossing at runtime to aid in choosing suitable data representations. The transition probability function used by the coin tossing is constructed in a simple and common way from a measured cost function. We show that, under this setting, random selection of data representations can be quite effective. Th...
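To make the idea concrete, here is a toy sketch of runtime coin tossing between two representations. All names are ours, not the paper's; a deterministic linear-congruential generator stands in for a real randomness source, and we assume a two-state transition whose bias is derived from a measured cost function.

```haskell
-- Hypothetical sketch of probabilistic representation selection.
-- Rep, probArray, and the cost arguments are our inventions for
-- illustration; the paper's construction is more general.
data Rep = ConsList | ArraySeq deriving (Eq, Show)

-- Tiny deterministic LCG so the sketch needs no external library.
nextRand :: Int -> Int
nextRand s = (1103515245 * s + 12345) `mod` 2147483648

-- Transition probability toward ArraySeq, built from measured costs:
-- the cheaper a representation looks, the likelier we move to it.
probArray :: Double -> Double -> Double
probArray costList costArray = costList / (costList + costArray)

-- One step of the random process: toss the biased coin and move.
step :: Double -> Double -> (Rep, Int) -> (Rep, Int)
step costList costArray (_, seed) =
  let seed' = nextRand seed
      u     = fromIntegral seed' / 2147483648 :: Double
      rep'  = if u < probArray costList costArray then ArraySeq else ConsList
  in (rep', seed')
```

Iterating `step` traces out a walk on the two-node representation graph, which is the Markov-process view the abstract describes.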
Amortization, Lazy Evaluation, and Persistence: Lists with Catenation via Lazy Linking
Pages 646–654 of: IEEE Symposium on Foundations of Computer Science, 1995
Abstract

Cited by 6 (1 self)
Amortization has been underutilized in the design of persistent data structures, largely because traditional accounting schemes break down in a persistent setting. Such schemes depend on saving "credits" for future use, but a persistent data structure may have multiple "futures", each competing for the same credits. We describe how lazy evaluation can often remedy this problem, yielding persistent data structures with good amortized efficiency. In fact, such data structures can be implemented purely functionally in any functional language supporting lazy evaluation. As an example of this technique, we present a purely functional (and therefore persistent) implementation of lists that simultaneously support catenation and all other usual list primitives in constant amortized time. This data structure is much simpler than the only existing data structure with comparable bounds, the recently discovered catenable lists of Kaplan and Tarjan, which support all operations in constant worst-case ...
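The flavor of the technique shows up in a much simpler structure than the paper's catenable lists: an Okasaki-style queue in Haskell, where the language's lazy lists let the reversal of the rear be suspended and its cost shared among all "futures" of a version. This is a sketch of the general idea, not the paper's data structure.

```haskell
-- Sketch of amortization via lazy evaluation: a functional queue
-- kept as a (front, rear) pair of lazy lists with their lengths.
-- The invariant |front| >= |rear| means each element is reversed
-- at most once, giving constant amortized time per operation.
data Queue a = Queue Int [a] Int [a]

empty :: Queue a
empty = Queue 0 [] 0 []

-- Restore the invariant; the (f ++ reverse r) is a lazy suspension.
check :: Queue a -> Queue a
check q@(Queue lf f lr r)
  | lr <= lf  = q
  | otherwise = Queue (lf + lr) (f ++ reverse r) 0 []

snoc :: Queue a -> a -> Queue a
snoc (Queue lf f lr r) x = check (Queue lf f (lr + 1) (x : r))

popFront :: Queue a -> Maybe (a, Queue a)
popFront (Queue _ [] _ _)      = Nothing
popFront (Queue lf (x:f) lr r) = Just (x, check (Queue (lf - 1) f lr r))
```

In a strict language the same code would still be correct but the amortized analysis would fail under persistence; laziness plus memoization is exactly what the abstract identifies as the remedy.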
Auburn: A Kit for Benchmarking Functional Data Structures
Lecture Notes in Computer Science, 1997
Abstract

Cited by 4 (3 self)
Benchmarking competing implementations of a data structure can be both tricky and time-consuming. The efficiency of an implementation may depend critically on how it is used. This problem is compounded by persistence: all purely functional data structures are persistent. We present a kit that can generate benchmarks for a given data structure. A benchmark is generated from a description of how it should use an implementation of the data structure. The kit will improve the speed, ease and power of the process of benchmarking functional data structures.
Optimal Batch Schedules for Parallel Machines
Abstract

Cited by 3 (1 self)
We consider the problem of batch scheduling on parallel machines where jobs have release times, deadlines, and identical processing times. The goal is to schedule these jobs in batches of size at most B on m identical machines. Previous work on this problem primarily focused on finding feasible schedules. Motivated by the problem of minimizing energy, we consider problems where the number of batches is significant. Minimizing the number of batches on a single processor previously required an impractical O(n^8) dynamic programming algorithm. We present an O(n^3) algorithm for simultaneously minimizing the number of batches and the maximum completion time, and give improved guarantees for variants with infinite-size batches, agreeable release times, and batch "budgets". Finally, we give a pseudo-polynomial algorithm for general batch-count-sensitive objective functions and correct errors in previous results.
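The paper's general algorithms are beyond a short sketch, but a greedy rule for one easy special case (agreeable, i.e. sorted, release times, no deadlines, a single machine) illustrates the two objectives being traded off. This is our illustration only, not the paper's O(n^3) algorithm.

```haskell
-- Illustrative greedy batching for a simplified setting: jobs with
-- identical processing time p, batches of size at most b, one machine,
-- no deadlines. Each batch starts once its latest job is released and
-- the machine is free. Returns (number of batches, makespan).
import Data.List (sort)

schedule :: Int -> Int -> [Int] -> (Int, Int)
schedule b p releases = go 0 0 (sort releases)
  where
    go nb t [] = (nb, t)
    go nb t rs =
      let (batch, rest) = splitAt b rs       -- greedily fill the batch
          start         = max t (last batch) -- wait for its last release
      in go (nb + 1) (start + p) rest
```

Filling every batch to capacity clearly minimizes the number of batches here; the interesting content of the paper is that batch count and maximum completion time can be minimized simultaneously in the far harder general case.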
Benchmarking Purely Functional Data Structures
Journal of Functional Programming, 1999
Abstract

Cited by 2 (0 self)
When someone designs a new data structure, they want to know how well it performs. Previously, the only way to find out was to find, code and test some applications to act as benchmarks. This can be tedious and time-consuming. Worse, how a benchmark uses a data structure may considerably affect the efficiency of the data structure. Thus, the choice of benchmarks may bias the results. For these reasons, new data structures developed for functional languages often pay little attention to empirical performance. We solve these problems by developing a benchmarking tool, Auburn, that can generate benchmarks across a fair distribution of uses. We precisely define "the use of a data structure", upon which we build the core algorithms of Auburn: how to generate a benchmark from a description of use, and how to extract a description of use from an application. We consider how best to use these algorithms to benchmark competing data structures. Finally, we test Auburn by benchmarking ...
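The notion of generating a benchmark from a "description of use" can be caricatured in a few lines. The names and the trivial stack below are ours, not Auburn's, and a weighted operation mix stands in for Auburn's much richer descriptions of use.

```haskell
-- Hypothetical sketch, not Auburn's API: a description of use is a
-- set of weights over operations, from which an operation sequence
-- (the benchmark) is generated and replayed against the
-- implementation under test.
data Op = Push | Pop deriving (Eq, Show)

-- Description of use: relative weights for each operation.
type Usage = [(Op, Int)]

-- Generate n operations whose mix matches the weights
-- (deterministically, to keep the sketch dependency-free).
genOps :: Usage -> Int -> [Op]
genOps usage n = take n (cycle (concatMap (\(op, w) -> replicate w op) usage))

-- Replay the benchmark against a plain list used as a stack,
-- returning the final stack size as an observable result.
run :: [Op] -> Int
run = length . foldl step ([] :: [Int])
  where
    step st      Push = 0 : st
    step (_:st)  Pop  = st
    step []      Pop  = []
```

Timing `run` over operation sequences drawn from different usage descriptions is the shape of the experiment Auburn automates, for data structures and descriptions far beyond this toy.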
Reflection without Remorse: Revealing a Hidden Sequence to Speed up Monadic Reflection
Abstract

Cited by 2 (0 self)
A series of list appends or monadic binds for many monads performs algorithmically worse when left-associated. Continuation-passing style (CPS) is well known to cure this severe dependence of performance on the association pattern. The advantage of CPS dwindles or disappears if we have to examine or modify the intermediate result of a series of appends or binds before continuing the series. Such examination is frequently needed, for example, to control search in nondeterminism monads. We present an alternative approach that is just as general as CPS but more robust: it makes series of binds and other such operations efficient regardless of the association pattern, and also provides efficient access to intermediate results. The key is to represent such a conceptual sequence as an efficient sequence data structure. Efficient sequence data structures from the literature are homogeneous and cannot be applied as they are in a type-safe way to series of monadic binds. We generalize them to type-aligned sequences and show how to construct their (assuredly order-preserving) implementations. We demonstrate that our solution solves previously undocumented, severe performance problems in iteratees, LogicT transformers, free monads and extensible effects.
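The association problem, and the function-as-sequence remedy that CPS amounts to for plain lists, can be shown with a classic difference-list sketch (ours, not the paper's code). The paper's type-aligned sequences generalize this idea while, unlike a bare function representation, keeping intermediate results inspectable.

```haskell
-- Left-associated (xs ++ ys) ++ zs re-walks the accumulated prefix,
-- so n appends of singletons cost O(n^2). Representing a list as the
-- function (xs ++) makes append mere composition, O(1) per append.
newtype DList a = DList ([a] -> [a])

fromList :: [a] -> DList a
fromList xs = DList (xs ++)

toList :: DList a -> [a]
toList (DList f) = f []

-- Append is function composition: no prefix is ever re-walked.
append :: DList a -> DList a -> DList a
append (DList f) (DList g) = DList (f . g)

-- Left-associated concatenation: quadratic with plain (++),
-- linear overall with the DList representation.
concatLeft :: [[a]] -> [a]
concatLeft = toList . foldl append (fromList [])
```

Note that examining the head of a `DList` mid-sequence forces `toList` and rebuilds the function, which is exactly the weakness of CPS that type-aligned sequences remove.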
Loopless Functional Algorithms
2005
Abstract
Loopless algorithms generate successive combinatorial patterns in constant time, producing the first in time linear in the size of the input. Although these algorithms were originally formulated in an imperative setting, this thesis proposes a functional interpretation of them in the lazy language Haskell. Since it may not be possible to produce a pattern in constant time, a list of integers generated using the library function unfoldr determines the transitions between consecutive patterns. The generation of Gray codes, permutations, ideals of posets and combinations illustrates applications of loopless algorithms in both imperative and functional form, particularly derivations of the Koda-Ruskey and Johnson-Trotter algorithms. Common themes in the construction of loopless imperative algorithms, such as focus pointers, doubly linked lists and coroutines, contrast greatly with the functional uses of real-time queues, tree traversals, fusion and tupling.
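A minimal illustration of the unfoldr-driven style (our simplification, not the thesis's code): the transition list says which bit to flip between consecutive binary reflected Gray codes, and scanning the flips yields the patterns themselves.

```haskell
import Data.Bits (xor, bit)
import Data.List (unfoldr)

-- The k-th transition flips the bit at the 2-adic valuation of k,
-- i.e. the index of k's lowest set bit; unfoldr produces this list
-- of transitions, in the spirit of the thesis.
transitions :: Int -> [Int]
transitions n = unfoldr step 1
  where
    step k | k == 2 ^ n = Nothing
           | otherwise  = Just (valuation k, k + 1)
    valuation k                      -- index of the lowest set bit
      | odd k     = 0
      | otherwise = 1 + valuation (k `div` 2)

-- Apply the transitions: each Gray code differs from its
-- predecessor in exactly one bit.
grayCodes :: Int -> [Int]
grayCodes n = scanl (\c i -> c `xor` bit i) 0 (transitions n)
```

The separation is the point: the transition list may take unbounded time to think about, but consuming it and flipping one bit per pattern is the constant-time "loopless" part.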
Simple Confluently Persistent Catenable Lists.
1999
Abstract
We consider the problem of maintaining persistent lists subject to concatenation and to insertions and deletions at both ends. Updates to a persistent data structure are nondestructive: each operation produces a new list incorporating the change, while keeping intact the list or lists to which it applies. Although general techniques exist for making data structures persistent, these techniques fail for structures that are subject to operations, such as catenation, that combine two or more versions. In this paper we develop a simple implementation of persistent double-ended queues with catenation that supports all deque operations in constant amortized time. Our implementation is functional if we allow memoization.
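Persistence under catenation can be illustrated with a far simpler structure than the paper's deques: a tree-shaped catenable list in which catenation is O(1) and every earlier version survives every update. This sketch (ours) has none of the paper's amortized bounds, but it shows the nondestructive behavior the abstract describes.

```haskell
-- Minimal sketch of a persistent catenable list: concatenation just
-- builds a new interior node, so both argument versions remain
-- intact and usable. Reading the list out is a right-to-left fold
-- over the tree with an accumulator.
data Cat a = Nil | Leaf a | Cat a :<>: Cat a

cat :: Cat a -> Cat a -> Cat a
cat Nil ys = ys
cat xs Nil = xs
cat xs ys  = xs :<>: ys

toList :: Cat a -> [a]
toList t = go t []
  where
    go Nil        acc = acc
    go (Leaf x)   acc = x : acc
    go (l :<>: r) acc = go l (go r acc)
```

Because a version can appear as a subtree of many later versions, naive credit-based amortized analyses break down here, which is precisely the difficulty the paper's structure is engineered around.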