Results 11-20 of 21
Amortization, Lazy Evaluation, and Persistence: Lists with Catenation via Lazy Linking
Pages 646-654 of: IEEE Symposium on Foundations of Computer Science, 1995
Abstract

Cited by 6 (1 self)
Amortization has been underutilized in the design of persistent data structures, largely because traditional accounting schemes break down in a persistent setting. Such schemes depend on saving "credits" for future use, but a persistent data structure may have multiple "futures", each competing for the same credits. We describe how lazy evaluation can often remedy this problem, yielding persistent data structures with good amortized efficiency. In fact, such data structures can be implemented purely functionally in any functional language supporting lazy evaluation. As an example of this technique, we present a purely functional (and therefore persistent) implementation of lists that simultaneously support catenation and all other usual list primitives in constant amortized time. This data structure is much simpler than the only existing data structure with comparable bounds, the recently discovered catenable lists of Kaplan and Tarjan, which support all operations in constant worst-ca...
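The lazy-linking idea is easiest to see on a structure simpler than catenable lists. Below is a minimal Haskell sketch of a purely functional queue in the style the abstract describes: the rear list is reversed inside a suspension, so every persistent "future" of a queue version shares (and pays for) that reversal at most once. The names (`Queue`, `snoc`, `uncons`) are illustrative, not taken from the paper.

```haskell
module Main where

-- Sizes are tracked so the invariant |rear| <= |front| can be checked in O(1).
data Queue a = Queue Int [a] Int [a]

emptyQ :: Queue a
emptyQ = Queue 0 [] 0 []

-- Restore the invariant. The (f ++ reverse r) is a suspension that all
-- futures of this version share, so its cost is paid at most once.
check :: Queue a -> Queue a
check q@(Queue lf f lr r)
  | lr <= lf  = q
  | otherwise = Queue (lf + lr) (f ++ reverse r) 0 []

snoc :: Queue a -> a -> Queue a
snoc (Queue lf f lr r) x = check (Queue lf f (lr + 1) (x : r))

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue _ [] _ _)      = Nothing
uncons (Queue lf (x:f) lr r) = Just (x, check (Queue (lf - 1) f lr r))

toListQ :: Queue a -> [a]
toListQ q = maybe [] (\(x, q') -> x : toListQ q') (uncons q)

main :: IO ()
main = print (toListQ (foldl snoc emptyQ [1 .. 5 :: Int]))
```

Because the structure is purely functional, any older version of the queue remains usable after a `snoc` or `uncons`, which is exactly the persistence the abstract is concerned with.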
Auburn: A Kit for Benchmarking Functional Data Structures
LECTURE NOTES IN COMPUTER SCIENCE, 1997
Abstract

Cited by 4 (3 self)
Benchmarking competing implementations of a data structure can be both tricky and time-consuming. The efficiency of an implementation may depend critically on how it is used. This problem is compounded by persistence. All purely functional data structures are persistent. We present a kit that can generate benchmarks for a given data structure. A benchmark is made from a description of how it should use an implementation of the data structure. The kit will improve the speed, ease, and power of the process of benchmarking functional data structures.
Improved Multi-unit Auction Clearing Algorithms with Interval (Multiple-Choice) Knapsack Problems
Abstract

Cited by 1 (0 self)
Abstract. We study the interval knapsack problem (IKP), and the interval multiple-choice knapsack problem (IMCKP), as generalizations of the classic 0/1 knapsack problem (KP) and the multiple-choice knapsack problem (MCKP), respectively. Compared to singleton items in KP and MCKP, each item i in IKP and IMCKP is represented by a ([ai, bi], pi) pair, where the integer interval [ai, bi] specifies the possible range of units, and pi is the unit price. Our main results are an FPTAS for IKP with time O(n log n + n/ε^2) and an FPTAS for IMCKP with time O(nm/ε), and pseudo-polynomial-time algorithms for both IKP and IMCKP with time O(nM) and space O(n + M). Here n, m, and M denote the number of items, the number of item sets, and the knapsack capacity, respectively. We also present a 2-approximation of IKP and a 3-approximation of IMCKP, both in linear time. We apply IKP and IMCKP to the single-good multi-unit sealed-bid auction clearing problem, where M identical units of a single good are auctioned. We focus on two bidding models: the interval model allows each bid to specify an interval range of units, and the XOR-interval model allows a bidder to specify a set of mutually exclusive interval bids. The interval and XOR-interval bidding models correspond to IKP and IMCKP respectively, and thus are solved accordingly. We also show how to compute VCG payments to all the bidders with an overhead of an O(log n) factor. Our results for the XOR-interval bidding model imply improved algorithms for the piecewise constant bidding model studied by Kothari et al. [18], improving their algorithms by a factor of Ω(n).
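For concreteness, here is a small Haskell sketch of the pseudo-polynomial dynamic program for IKP in its naive form. It runs in roughly O(nM · maxRange) rather than the paper's O(nM), since it tries every feasible quantity x in [a, b] directly; the item representation as (a, b, p) triples and all names are assumptions for illustration, not the paper's code.

```haskell
import Data.List (foldl')

-- An IKP item: choose x = 0 units, or any a <= x <= b units at unit price p.
type Item = (Int, Int, Int)

-- dp !! c = best value achievable with capacity c using the items seen so far.
ikp :: Int -> [Item] -> Int
ikp m items = last (foldl' step (replicate (m + 1) 0) items)
  where
    step dp (a, b, p) =
      [ maximum ( (dp !! c)                                  -- take 0 units
                : [ p * x + dp !! (c - x)                    -- take x units
                  | x <- [a .. min b c] ] )
      | c <- [0 .. m] ]
```

Reaching the paper's O(nM) bound requires eliminating the inner loop over x; this sketch only illustrates the recurrence that such an algorithm speeds up.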
A Replicated and Persistent Functional Programming Environment
Abstract
Traditional database management systems perform updates in place and use logs and periodic checkpointing to efficiently achieve atomicity and durability. In this thesis we shall present a different method, Shades, for achieving atomicity and durability using a copy-on-write policy instead of updates in place. We shall also present index structures and the implementation of Shines, a persistent functional programming language, built on top of Shades. Shades includes real-time generational garbage collection. Real-time behavior is achieved by collecting only a small part, a generation, of the database at a time. Contrary to previously presented persistent garbage collection algorithms, Shades has no need to maintain metadata (remembered sets) of intra-generation pointers on disk, since the metadata can be reconstructed during recovery. This considerably reduces the amount of disk writing. In conjunction with aggressive commit grouping, efficient index structures, a design specialized to a main-memory environment, and a carefully crafted implementation of Shines, we have achieved surprisingly high performance, handsomely beating commercial database management systems.
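The copy-on-write principle the abstract relies on can be shown in miniature with path copying in a binary search tree: an update rebuilds only the nodes on the search path and shares every other subtree, so the pre-update version stays fully readable. This is a generic illustration of the policy, not Shades itself; all names are made up.

```haskell
data Tree = Leaf | Node Tree Int Tree

-- Insert by path copying: only nodes on the root-to-leaf path are rebuilt;
-- all other subtrees are shared between the old and new versions.
insert :: Int -> Tree -> Tree
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r
  | x > v     = Node l v (insert x r)
  | otherwise = t

inorder :: Tree -> [Int]
inorder Leaf = []
inorder (Node l v r) = inorder l ++ [v] ++ inorder r
```

Because `insert` never mutates, an old version remains intact after a new one is committed, which is the essence of achieving durability without updates in place.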
The Role of Lazy Evaluation in Amortized Data Structures
Abstract
Functional programmers have long debated the relative merits of strict versus lazy evaluation. Although lazy evaluation has many benefits [11], strict evaluation is clearly superior in at least one area: ease of reasoning about asymptotic complexity. Because of the unpredictable nature of lazy evaluation, it is notoriously difficult to reason about the complexity of algorithms in such a language. However, there are some algorithms based on lazy evaluation that cannot be programmed in (pure) strict languages without an increase in asymptotic complexity. We explore one class of such algorithms, amortized data structures, and describe techniques for reasoning about their complexity.
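The mechanism underlying such algorithms is that a suspension is memoized: once any "future" of a shared value forces it, every other version sharing it gets the result for free. A tiny Haskell demonstration, using `Debug.Trace` only to make the single evaluation visible (all names assumed for illustration):

```haskell
module Main where

import Debug.Trace (trace)

main :: IO ()
main = do
  -- Two futures of one value share a single thunk; the trace message
  -- appears once even though both futures demand the sum.
  let suspended = trace "forcing the suspension" (sum [1 .. 1000 :: Int])
  print (suspended + 0)  -- forces and memoizes the thunk
  print (suspended + 1)  -- reuses the memoized result
```

This sharing is what lets an expensive operation be charged once across all versions of a persistent structure, instead of once per future.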
Persistent Linked Structures at Constant WorstCase Cost
Abstract
We present a method for making linked structures with nodes of in-degree not exceeding 1 partially persistent at a worst-case time cost of O(1) per access step and a worst-case time and space cost of O(1) per update step. The last two improve the best previous result, which gave O(1) amortized bounds on time and space. Our results extend to full persistence. Making a change to an ordinary data structure destroys the old version, leaving only the new one. Such a structure is said to be ephemeral. With a persistent data structure, on the other hand, old versions are not destroyed, making it possible to access or modify old versions as well as the newest one. A structure is said to be partially persistent if every version can be accessed but only the newest version can be modified, and fully persistent if every version can be both accessed and modified. Researchers have devised partially or fully persistent forms for a number of data structures, including stacks [10], ...
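One standard route to partial persistence, which O(1) worst-case methods improve upon, is the "fat node" idea: a field keeps its entire version history, and a read at version v scans for the latest write no later than v. A minimal Haskell sketch for a single field (illustrative names; this is the baseline technique, not the paper's construction):

```haskell
-- A fat field: the full (version, value) history, newest first.
type FatField a = [(Int, a)]

-- Writing is only allowed at the newest version (partial persistence).
writeAt :: Int -> a -> FatField a -> FatField a
writeAt v x history = (v, x) : history

-- Reading works at any version: take the latest write with version <= v.
readAt :: Int -> FatField a -> Maybe a
readAt v history = case [x | (w, x) <- history, w <= v] of
  (x : _) -> Just x
  []      -> Nothing
```

Here a read costs time proportional to the history length, which is exactly the overhead that a constant worst-case scheme must avoid.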
Sequence Implementations in Haskell
, 1997
Abstract
An abstract data type sequence is defined with the operations empty, isEmpty, cons, snoc, popFront, popRear, lengthSeq, toList, and toSeq. A sequence with the operations lookupSeq and updateSeq is an Indexable Sequence. A sequence with catenation is called a Catenable Sequence. Some functional implementations of these abstract data types taken from the literature are described. These implementations are classified as stacks, deques, flexible arrays, and catenable lists, if they can be used as efficient implementations of each of these traditional data types. Some of them are extended to provide the operations defined for sequences. Some comments and directions for further research are also included. The implementations are made in the functional programming language Haskell as instances of one or more of the classes Sequence, IndSeq, and CatSeq, with the operations defined for each type. These instances are classified by the subset of these operations that each instance supports eff...
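The operations listed translate directly into a Haskell type class. The sketch below, with a naive list instance, is a guess at the shape of the paper's `Sequence` class from the operation names alone, not its actual code.

```haskell
class Sequence s where
  empty     :: s a
  isEmpty   :: s a -> Bool
  cons      :: a -> s a -> s a
  snoc      :: s a -> a -> s a
  popFront  :: s a -> Maybe (a, s a)
  popRear   :: s a -> Maybe (s a, a)
  lengthSeq :: s a -> Int
  toList    :: s a -> [a]
  toSeq     :: [a] -> s a

-- The simplest instance: plain lists. Efficient as a stack (cons/popFront),
-- but snoc and popRear cost O(n).
newtype ListSeq a = ListSeq [a]

instance Sequence ListSeq where
  empty                     = ListSeq []
  isEmpty (ListSeq xs)      = null xs
  cons x (ListSeq xs)       = ListSeq (x : xs)
  snoc (ListSeq xs) x       = ListSeq (xs ++ [x])
  popFront (ListSeq [])     = Nothing
  popFront (ListSeq (x:xs)) = Just (x, ListSeq xs)
  popRear (ListSeq [])      = Nothing
  popRear (ListSeq xs)      = Just (ListSeq (init xs), last xs)
  lengthSeq (ListSeq xs)    = length xs
  toList (ListSeq xs)       = xs
  toSeq                     = ListSeq
```

The paper's classification then amounts to asking, for each instance, which of these operations it supports efficiently.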
Loopless Functional Algorithms
, 2005
Abstract
Loopless algorithms generate successive combinatorial patterns in constant time, producing the first in time linear in the size of the input. Although originally formulated in an imperative setting, this thesis proposes a functional interpretation of these algorithms in the lazy language Haskell. Since it may not be possible to produce a pattern in constant time, a list of integers generated using the library function unfoldr determines the transitions between consecutive patterns. The generation of Gray codes, permutations, ideals of posets, and combinations illustrates applications of loopless algorithms in both imperative and functional form, particularly derivations of the Koda-Ruskey and Johnson-Trotter algorithms. Common themes in the construction of loopless imperative algorithms, such as focus pointers, doubly linked lists, and coroutines, contrast greatly with the functional uses of real-time queues, tree traversals, fusion, and tupling.
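As a flavor of the unfoldr formulation the abstract mentions, here is a sketch generating the transition sequence of the n-bit binary reflected Gray code: the k-th integer says which bit to flip to reach the next pattern. The helper is not itself loopless — a genuinely loopless version needs the real-time machinery the thesis describes — and all names are illustrative.

```haskell
import Data.Bits (shiftR, (.&.))
import Data.List (unfoldr)

-- Position of the lowest set bit (the ruler function); not constant time here.
trailingZeros :: Int -> Int
trailingZeros k
  | k .&. 1 == 1 = 0
  | otherwise    = 1 + trailingZeros (k `shiftR` 1)

-- The 2^n - 1 transitions of the n-bit binary reflected Gray code:
-- the k-th pattern differs from the (k-1)-th in bit (trailingZeros k).
grayTransitions :: Int -> [Int]
grayTransitions n = unfoldr step 1
  where
    step k
      | k >= 2 ^ n = Nothing
      | otherwise  = Just (trailingZeros k, k + 1)
```

Starting from the all-zero pattern and flipping the indicated bit at each step enumerates all 2^n bit strings with one bit changing per transition.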