Results 1–10 of 16
A skip list cookbook
, 1990
Abstract

Cited by 28 (1 self)
Skip lists are a probabilistic data structure that seem likely to supplant balanced trees as the implementation method of choice for many applications. Skip list algorithms have the same asymptotic expected time bounds as balanced trees and are simpler, faster and use less space. The original paper on skip lists only presented algorithms for search, insertion and deletion. In this paper, we show that skip lists are as versatile as balanced trees. We describe and analyze algorithms to use search fingers, merge, split and concatenate skip lists, and implement linear list operations using skip lists. The skip list algorithms for these actions are faster and simpler than their balanced tree cousins. The merge algorithm for skip lists we describe has better asymptotic time complexity than any previously described merge algorithm for balanced trees.
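As a concrete illustration of the structure this abstract builds on, here is a minimal skip list supporting search and insert. This is a sketch of the textbook algorithm, not the paper's code: the class names, the p = 1/2 coin flip, and the fixed maximum level are illustrative choices.

```python
import random

class SkipNode:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level  # one successor pointer per level

class SkipList:
    """Minimal skip list: search and insert only (illustrative sketch)."""
    MAX_LEVEL = 16

    def __init__(self):
        self.head = SkipNode(None, self.MAX_LEVEL)  # sentinel head
        self.level = 1

    def _random_level(self):
        # Flip a fair coin; each heads adds one level (expected height O(log n)).
        lvl = 1
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        node = self.head
        for i in range(self.level - 1, -1, -1):  # descend level by level
            while node.forward[i] is not None and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        update = [self.head] * self.MAX_LEVEL  # rightmost node seen per level
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] is not None and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipNode(key, lvl)
        for i in range(lvl):  # splice the new node in at each of its levels
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new
```

The expected O(log n) search bound comes from the geometric level distribution; the search works correctly for any outcome of the coin flips.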
Dijkstra’s algorithm with Fibonacci heaps: An executable description
 in CHR. In 20th Workshop on Logic Programming (WLP’06)
, 2006
Abstract

Cited by 18 (11 self)
Abstract. We construct a readable, compact and efficient implementation of Dijkstra’s shortest path algorithm and Fibonacci heaps using Constraint Handling Rules (CHR), which is increasingly used as a high-level rule-based general-purpose programming language. We measure its performance in different CHR systems, investigating both the theoretical asymptotic complexity and the constant factors realized in practice.
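The paper pairs Dijkstra's algorithm with Fibonacci heaps in CHR; as a rough illustration of the underlying shortest-path computation, here is a sketch using Python's `heapq` binary heap with lazy deletion instead, so the asymptotics differ from the Fibonacci-heap version (no O(1) decrease-key). The adjacency-list encoding is an assumption for the example.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths over non-negative weights.
    graph: {node: [(neighbor, weight), ...]} (assumed encoding)."""
    dist = {source: 0}
    pq = [(0, source)]  # (distance, node) pairs; stale entries allowed
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # lazy deletion: skip superseded queue entries
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

With a binary heap this runs in O((V + E) log V); the Fibonacci-heap variant the paper implements improves this to O(E + V log V).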
Self-Organizing Data Structures
 In
, 1998
Abstract

Cited by 18 (0 self)
We survey results on self-organizing data structures for the search problem and concentrate on two very popular structures: the unsorted linear list, and the binary search tree. For the problem of maintaining unsorted lists, also known as the list update problem, we present results on the competitiveness achieved by deterministic and randomized online algorithms. For binary search trees, we present results for both online and offline algorithms. Self-organizing data structures can be used to build very effective data compression schemes. We summarize theoretical and experimental results. This paper surveys results in the design and analysis of self-organizing data structures for the search problem. The general search problem in pointer data structures can be phrased as follows. The elements of a set are stored in a collection of nodes. Each node also contains O(1) pointers to other nodes and additional state data which can be used for navigation and self-organizati...
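The best-known rule for the list update problem the survey discusses is move-to-front: after accessing an item, move it to the head of the list. A minimal sketch under the standard cost model (cost of an access = 1-based position of the item); the function name is illustrative, not from the survey.

```python
def move_to_front(requests, initial):
    """Serve a request sequence with the move-to-front rule.
    Returns (total access cost, final list order)."""
    lst = list(initial)
    cost = 0
    for x in requests:
        i = lst.index(x)          # linear scan to find the item
        cost += i + 1             # pay its 1-based position
        lst.insert(0, lst.pop(i)) # self-organize: promote to the front
    return cost, lst
```

Move-to-front is 2-competitive against the optimal offline algorithm for this problem, which is among the competitiveness results the survey covers.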
Optimal Purely Functional Priority Queues
 JOURNAL OF FUNCTIONAL PROGRAMMING
, 1996
Abstract

Cited by 18 (1 self)
Brodal recently introduced the first implementation of imperative priority queues to support findMin, insert, and meld in O(1) worst-case time, and deleteMin in O(log n) worst-case time. These bounds are asymptotically optimal among all comparison-based priority queues. In this paper, we adapt Brodal's data structure to a purely functional setting. In doing so, we both simplify the data structure and clarify its relationship to the binomial queues of Vuillemin, which support all four operations in O(log n) time. Specifically, we derive our implementation from binomial queues in three steps: first, we reduce the running time of insert to O(1) by eliminating the possibility of cascading links; second, we reduce the running time of findMin to O(1) by adding a global root to hold the minimum element; and finally, we reduce the running time of meld to O(1) by allowing priority queues to contain other priority queues. Each of these steps is expressed using ML-style functors. The last transformation, known as data-structural bootstrapping, is an interesting application of higher-order functors and recursive structures.
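To illustrate the purely functional setting the paper works in, here is a persistent leftist heap: every operation returns a new heap and never mutates an existing one, so old versions remain usable. This is deliberately simpler than the Brodal–Okasaki structure the paper derives (insert and meld here are O(log n), not O(1)); the tuple encoding is an assumption for the sketch.

```python
# Heap = None (empty) or (rank, key, left, right), all immutable tuples.

def meld(h1, h2):
    """Merge two persistent leftist min-heaps in O(log n)."""
    if h1 is None:
        return h2
    if h2 is None:
        return h1
    if h1[1] > h2[1]:
        h1, h2 = h2, h1          # keep the smaller root in h1
    _, k, left, right = h1
    merged = meld(right, h2)     # always recurse down the short right spine
    rank_l = left[0] if left else 0
    rank_r = merged[0] if merged else 0
    if rank_l >= rank_r:         # leftist invariant: rank(left) >= rank(right)
        return (rank_r + 1, k, left, merged)
    return (rank_l + 1, k, merged, left)

def insert(h, key):
    return meld(h, (1, key, None, None))

def find_min(h):
    return h[1]                  # O(1): the root holds the minimum

def delete_min(h):
    return meld(h[2], h[3])      # meld the two subtrees
```

The paper's three transformations (suppressing cascading links, a global minimum root, bootstrapping) are exactly what would bring insert, findMin, and meld down to O(1) worst case from a starting point like this.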
A cluster-based approach to tracking, detection and segmentation of broadcast news
 In Proceedings of the DARPA Broadcast News Workshop
, 1999
Abstract

Cited by 10 (1 self)
We present results of the University of Iowa topic tracking and detection as well as story segmentation efforts. Topic tracking is performed for the “boundaries given” case. The DET curves for all the runs are consistently smooth and concave, suggesting no sudden changes in expectation required from the user. The effect of reducing the training size of relevant stories is examined. The detection runs are performed using a “pipeline” model to utilize the advantage of the deferral period. Performance is strongly influenced by the fact that roughly 2000 to 3000 declared topic clusters are generated during the detection runs. Performance is analyzed with respect to changing the cluster threshold. In segmentation, an agglomerative clustering strategy is adopted. The decision to declare a boundary depends on both the lexical similarity of neighboring segments and the pause duration. The algorithmic complexity of the method is O(k log k), where k is the number of pause-delimited sentences in the file. The tracking, detection and segmentation modules provide a sound framework for future extension and experimentation.
Weight Biased Leftist Trees and Modified Skip Lists
 Journal of Experimental Algorithmics
, 1996
Abstract

Cited by 10 (1 self)
In this paper, we are concerned primarily with the insert and delete min operations. The different data structures that have been proposed for the representation of a priority queue differ in terms of the performance guarantees they provide. Some guarantee good performance on a per-operation basis while others do this only in the amortized sense. Heaps permit one to delete the min element and insert an arbitrary element into an n-element priority queue in O(log n) time per operation; a find min takes O(1) time. Additionally, a heap is an implicit data structure that has no storage overhead associated with it. All other priority queue structures are pointer-based and so require additional storage for the pointers. Leftist trees also support the insert and delete min operations in O(log n) time per operation and the find min operation in O(1) time. Additionally, they permit us to meld pairs of priority queues in logarithmic time.
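The "implicit" representation the abstract mentions can be seen directly with Python's `heapq`, which stores a binary min-heap in a plain list with no pointers at all; this demo is only an illustration of those operation costs, not code from the paper.

```python
import heapq

# Implicit binary min-heap: just a list, children of index i live at
# 2i+1 and 2i+2, so no pointer storage is needed.
pq = []
for x in [7, 2, 9, 4]:
    heapq.heappush(pq, x)     # insert: O(log n)

top = pq[0]                    # find min: O(1), simply the first slot
smallest = heapq.heappop(pq)   # delete min: O(log n) sift-down
```

Melding two such array heaps, however, costs O(n); that is where the pointer-based leftist trees (and the paper's weight-biased variant) earn their extra storage with an O(log n) meld.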
Portable Distributed Priority Queues with MPI
, 1995
Abstract

Cited by 9 (0 self)
Part of this work has been presented in [17]. This paper analyzes the performance of portable distributed priority queues by examining the theoretical features required and by comparing various implementations. In spite of intrinsic bottlenecks and induced hot spots, we argue that tree topologies are attractive for managing the natural centralized control required by the delete-min operation in order to detect the site which holds the item with the largest priority. We introduce an original perfect balancing to cope with the load variation due to the priority queue operations, which continuously modify the overall number of items in the network. For comparison, we introduce the d-heap and the binomial distributed priority queue. The purpose of this experiment is to convey, through executions on the Cray T3D and Meiko T800, an understanding of the nature of distributed priority queues, the range of their concurrency and a comparison of their efficiency in reducing request latency. In particu...
A Linear Time Algorithm for the k Maximal Sums Problem
Abstract

Cited by 6 (2 self)
Abstract. Finding the subvector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k subvectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2 · n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
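For orientation, the k = 1 base case of this problem is the classic maximum sum (maximum subarray) problem, solvable in O(n) time by Kadane's algorithm. This sketch shows only that base case, not the paper's O(n + k) algorithm for general k.

```python
def max_subvector_sum(a):
    """Largest sum over all contiguous subvectors of a non-empty sequence
    (Kadane's algorithm, O(n) time, O(1) space)."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)    # best sum of a subvector ending at x
        best = max(best, cur)    # best sum seen anywhere so far
    return best
```

The paper's generalization maintains, roughly speaking, the k best candidates at each step instead of one, using a priority-queue-like structure to keep the total cost at O(n + k).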
A Performance and Scalability Analysis Framework for Parallel Discrete Event Simulators
 J. Cryptology
, 1992
Abstract

Cited by 1 (0 self)
The development of efficient parallel discrete event simulators is hampered by the large number of interrelated factors affecting performance. This problem is made more difficult by the lack of scalable representative models that can be used to analyze optimizations and isolate bottlenecks. This paper proposes a performance and scalability analysis framework (PSAF) for parallel discrete event simulators. PSAF is built on a platform-independent workload specification language (WSL). WSL is a language that represents simulation models using a set of fundamental performance-critical parameters. For each simulator under study, a WSL translator generates synthetic platform-specific simulation models that conform to the performance and scalability characteristics specified by the WSL description. Moreover, sets of portable simulation models that explore the effects of the different parameters, individually or collectively, on the execution performance can easily be constructed using the synthetic workload generator (SWG). The SWG automatically generates simulation workloads with different performance properties. In addition, PSAF supports the seamless integration of real simulation models into the workload specification. Thus, a benchmark with both real and synthetically generated models can be built, allowing for realistic and thorough exploration of the performance space. The utility of PSAF in determining the boundaries of performance and scalability of simulation environments and models is demonstrated.