Results 1–10 of 34
Skip Lists: A Probabilistic Alternative to Balanced Trees
, 1990
"... Skip lists are data structures thla t use probabilistic balancing rather than strictly enforced balancing. As a result, the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees. ..."
Cited by 410 (1 self)
Skip lists are data structures that use probabilistic balancing rather than strictly enforced balancing. As a result, the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees.
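To make the probabilistic balancing concrete, here is a minimal illustrative sketch (the class names and constants are ours, not Pugh's): each inserted element draws a random level by repeated coin flips, and search descends from the highest level, moving right while the next key is smaller.

```python
import random

class SkipListNode:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)  # one successor pointer per level

class SkipList:
    """Minimal skip list sketch: balance comes from coin flips, not rotations."""
    MAX_LEVEL = 16
    P = 0.5  # probability of promoting an element one level up

    def __init__(self):
        self.head = SkipListNode(None, self.MAX_LEVEL)
        self.level = 0

    def _random_level(self):
        lvl = 0
        while random.random() < self.P and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node  # last node before the insertion point at level i
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipListNode(key, lvl)
        for i in range(lvl + 1):  # splice the new node in at each of its levels
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def contains(self, key):
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key
```

Because the structure is balanced only in expectation, insertion needs no rebalancing at all, which is exactly the simplicity the abstract claims; the expected search cost is O(log n).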
Skipnet: A scalable overlay network with practical locality properties
, 2003
"... Abstract: Scalable overlay networks such as Chord, Pastry, and Tapestry have recently emerged as a flexible infrastructure for building large peertopeer systems. In practice, two disadvantages of such systems are that it is difficult to control where data is stored and difficult to guarantee that ..."
Cited by 364 (5 self)
Scalable overlay networks such as Chord, Pastry, and Tapestry have recently emerged as a flexible infrastructure for building large peer-to-peer systems. In practice, two disadvantages of such systems are that it is difficult to control where data is stored and difficult to guarantee that routing paths remain within an administrative domain. SkipNet is a scalable overlay network that provides controlled data placement and routing locality guarantees by organizing data primarily by lexicographic key ordering. SkipNet also allows for both fine-grained and coarse-grained control over data placement: content can be placed either on a predetermined node or distributed uniformly across the nodes of a hierarchical naming subtree. An additional useful consequence of SkipNet's locality properties is that partition failures, in which an entire organization disconnects from the rest of the system, result in two disjoint but well-connected overlay networks.
Understanding tradeoffs in software transactional memory
 In Proceedings of the International Symposium on Code Generation and Optimization
, 2007
"... There has been a flurry of recent work on the design of high performance software and hybrid hardware/software transactional memories (STMs and HyTMs). This paper reexamines the design decisions behind several of these stateoftheart algorithms, adopting some ideas, rejecting others, all in an atte ..."
Cited by 29 (1 self)
There has been a flurry of recent work on the design of high-performance software and hybrid hardware/software transactional memories (STMs and HyTMs). This paper re-examines the design decisions behind several of these state-of-the-art algorithms, adopting some ideas and rejecting others, all in an attempt to make STMs faster. We created the transactional locking (TL) framework of STM algorithms and used it to conduct a range of comparisons of the performance of non-blocking, lock-based, and hybrid STM algorithms versus fine-grained hand-crafted ones. We were able to make several illuminating observations regarding lock acquisition order, the interaction of STMs with memory management schemes, and the role of overheads and abort rates in STM performance.
Computational bounds on hierarchical data processing with applications to information security
 In Proc. Int. Colloquium on Automata, Languages and Programming (ICALP), volume 3580 of LNCS
, 2005
"... Motivated by the study of algorithmic problems in the domain of information security, in this paper, we study the complexity of a new class of computations over a collection of values associated with a set of n elements. We introduce hierarchical data processing (HDP) problems which involve the comp ..."
Cited by 28 (16 self)
Motivated by the study of algorithmic problems in the domain of information security, in this paper we study the complexity of a new class of computations over a collection of values associated with a set of n elements. We introduce hierarchical data processing (HDP) problems, which involve the computation of a collection of output values from an input set of n elements, where the entire computation is fully described by a directed acyclic graph (DAG). That is, individual computations are performed and intermediate values are processed according to the hierarchy induced by the DAG. We present an Ω(log n) lower bound on various computational cost measures for HDP problems. Essential in our study is an analogy that we draw between the complexity of any HDP problem of size n and searching by comparison in an ordered set of n elements, which shows an interesting connection between the two problems. In view of the logarithmic lower bounds, we also develop a new randomized DAG scheme for HDP problems that provides close to optimal performance, achieving cost measures whose constant factors in the (logarithmic) leading asymptotic term are close to optimal. Our lower bounds are general and apply to all HDP problems; together with our new DAG construction, they provide a theoretical framework that is both interesting and useful for algorithm analysis. We apply our results to two information security problems, data authentication through cryptographic hashing and multicast key distribution using key graphs, and obtain a unified analysis and treatment of these problems. We show that both problems involve HDP and prove logarithmic lower bounds on their computational and communication costs. In particular, using our new DAG scheme, we present a new efficient authenticated dictionary with improved authentication overhead over previously known schemes. Moreover, through the relation between HDP and searching by comparison, we present a new skip-list version where the expected number of comparisons in a search is 1.25 log₂ n + O(1).
Selection Predicate Indexing for Active Databases Using Interval Skip Lists
 Information Systems
, 1996
"... A new, efficient selection predicate indexing scheme for active database systems is introduced. The selection predicate index proposed uses an interval index on an attribute of a relation or object collection when one or more rule condition clauses are defined on that attribute. The selection pre ..."
Cited by 27 (4 self)
A new, efficient selection predicate indexing scheme for active database systems is introduced. The proposed selection predicate index uses an interval index on an attribute of a relation or object collection when one or more rule condition clauses are defined on that attribute. The selection predicate index uses a new type of interval index called the interval skip list (IS-list). The IS-list is designed to allow efficient retrieval of all intervals that overlap a point, while allowing dynamic insertion and deletion of intervals. IS-list algorithms are described in detail. The IS-list allows efficient online searches, insertions, and deletions, yet is much simpler to implement than other comparable interval index data structures such as the priority search tree and the balanced interval binary search tree (IBS-tree). IS-lists require only one third as much code to implement as balanced IBS-trees. The combination of simplicity, performance, and dynamic updateability of the IS-li...
The Interval Skip List: A Data Structure for Finding All Intervals That Overlap a Point
 In Proc. of the 2nd Workshop on Algorithms and Data Structures
, 1992
"... A problem that arises in computational geometry, pattern matching, and other applications is the need to quickly determine which of a collection of intervals overlap a point. Requests of this type are called stabbing queries. A recently discovered randomized data structure called the skip list can ..."
Cited by 24 (3 self)
A problem that arises in computational geometry, pattern matching, and other applications is the need to quickly determine which of a collection of intervals overlap a point. Requests of this type are called stabbing queries. A recently discovered randomized data structure called the skip list can maintain ordered sets efficiently, just as balanced binary search trees can, but is much simpler to implement than balanced trees. This paper introduces an extension of the skip list called the interval skip list, or IS-list, to support interval indexing. The IS-list allows stabbing queries and dynamic insertion and deletion of intervals. A stabbing query on an IS-list containing n intervals takes expected time O(log n). Inserting or deleting an interval in an IS-list takes expected time O(log² n) if the interval endpoints are chosen from a continuous distribution. Moreover, the IS-list inherits much of the simplicity of the skip list: it can be implemented in a relativ...
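For reference, the query semantics the IS-list supports can be stated as a few lines of brute-force code (an O(n) baseline for illustration only; the IS-list answers the same query in expected O(log n) time, and the function name here is ours):

```python
def stabbing_query(intervals, point):
    """Return all closed intervals (lo, hi) that contain `point`.

    Naive O(n) scan, given only to pin down what a stabbing query returns.
    """
    return [(lo, hi) for (lo, hi) in intervals if lo <= point <= hi]
```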
Fast Set Intersection in Memory
"... Set intersection is a fundamental operation in information retrieval and database systems. This paper introduces linear space data structures to represent sets such that their intersection can be computed in a worstcase efficient way. In general, given k (preprocessed) sets, with totally n elements ..."
Cited by 15 (1 self)
Set intersection is a fundamental operation in information retrieval and database systems. This paper introduces linear-space data structures to represent sets such that their intersection can be computed in a worst-case efficient way. In general, given k (preprocessed) sets with n elements in total, we will show how to compute their intersection in expected time O(n/√w + kr), where r is the intersection size and w is the number of bits in a machine word. In addition, we introduce a very simple version of this algorithm that has weaker asymptotic guarantees but performs even better in practice; both algorithms outperform state-of-the-art techniques on both synthetic and real data sets and workloads.
Skiplist-Based Concurrent Priority Queues
, 2000
"... This paper addresses the problem of designing scalable concurrent priority queues for large scale multiprocessors – machines with up to several hundred processors. Priority queues are fundamental in the design of modern multiprocessor algorithms, with many classical applications ranging from numeric ..."
Cited by 15 (3 self)
This paper addresses the problem of designing scalable concurrent priority queues for large-scale multiprocessors: machines with up to several hundred processors. Priority queues are fundamental in the design of modern multiprocessor algorithms, with many classical applications ranging from numerical algorithms through discrete event simulation and expert systems. While highly scalable approaches have been introduced for the special case of queues with a fixed set of priorities, the most efficient designs for the general case are based on the parallelization of the heap data structure. Though numerous intricate heap-based schemes have been suggested in the literature, their scalability seems to be limited to small machines in the range of ten to twenty processors. This paper proposes an alternative approach: to base the design of concurrent priority queues on the probabilistic skip-list data structure, rather than on a heap. To this end, we show that a concurrent skip-list structure, following a simple set of modifications, provides a concurrent priority queue with a higher level of parallelism and significantly less contention than the fastest known heap-based algorithms. Our initial empirical evidence, collected on a simulated 256-node shared-memory multiprocessor architecture similar to the MIT Alewife, suggests that the new skip-list-based priority queue algorithm scales significantly better than heap-based schemes throughout most of the concurrency range. With 256 processors, it is about 3 times faster in performing deletions and up to 10 times faster in performing insertions.
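The core idea is that a skip list keeps its elements sorted, so delete-min is simply "unlink the leftmost node". A sequential sketch of that idea follows (our own simplified code, not the paper's concurrent algorithm, which adds the synchronization needed for many processors):

```python
import random

class _Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)

class SkipListPQ:
    """Sequential sketch of a skip-list-backed priority queue.

    insert places a key in sorted order; delete_min unlinks the first
    bottom-level node, which is always the minimum.
    """
    MAX_LEVEL = 16

    def __init__(self):
        self.head = _Node(None, self.MAX_LEVEL)
        self.level = 0

    def insert(self, key):
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        self.level = max(self.level, lvl)
        new = _Node(key, lvl)
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def delete_min(self):
        first = self.head.forward[0]
        if first is None:
            raise IndexError("empty priority queue")
        # The minimum is first at every level it appears in, so unlinking
        # it only requires updating the head's pointers.
        for i in range(len(first.forward)):
            self.head.forward[i] = first.forward[i]
        return first.key
```

Compared with a heap, where every delete-min percolates through the root and serializes all threads, the skip list spreads insertions across the whole structure, which is the source of the reduced contention reported above.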
Compressed perfect embedded skip lists for quick inverted-index lookups
 In Proc. SPIRE 2005, Lecture Notes in Computer Science
, 2005
"... Large inverted indices are by now common in the construction of webscale search engines. For faster access, inverted indices are indexed internally so that it is possible to skip quickly over unnecessary documents. The classical approach to skipping dictates that a skip should be positioned every √ ..."
Cited by 13 (2 self)
Large inverted indices are by now common in the construction of web-scale search engines. For faster access, inverted indices are indexed internally so that it is possible to skip quickly over unnecessary documents. The classical approach to skipping dictates that a skip should be positioned every √f document pointers, where f is the overall number of documents in which the term appears. We argue that due to the growing size of the web more refined techniques are necessary, and describe how to embed a compressed perfect skip list in an inverted list. We provide statistical models that explain the empirical distribution of the skip data we observe in our experiments, and use them to devise good compression techniques that limit the waste in space, so that the resulting data structure increases the overall index size by just a few percent, while still making it possible to index pointers with a rather fine granularity.
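The classical √f skipping scheme that this paper refines can be sketched as follows (function names and representation are ours; postings are sorted document-ID lists, and a skip is taken only when it does not overshoot the other list's current document):

```python
import math

def build_skips(postings):
    """Place a skip pointer every ~sqrt(f) positions in a sorted postings list.

    Returns a dict mapping a position to the position its skip jumps to.
    """
    step = max(1, math.isqrt(len(postings)))
    return {i: i + step for i in range(0, len(postings) - step, step)}

def intersect_with_skips(a, b):
    """Intersect two sorted postings lists, following skips when safe."""
    skips_a, skips_b = build_skips(a), build_skips(b)
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            # Jump only if the skip target does not pass b[j].
            if i in skips_a and a[skips_a[i]] <= b[j]:
                i = skips_a[i]
            else:
                i += 1
        else:
            if j in skips_b and b[skips_b[j]] <= a[i]:
                j = skips_b[j]
            else:
                j += 1
    return out
```

The paper's point is that on web-scale lists a fixed √f spacing is too coarse, and that compressing an embedded perfect skip list allows much finer-grained skip data at little space cost.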
Analysis of an Optimized Search Algorithm for Skip Lists
 Theoretical Computer Science
, 1994
"... It was suggested in [8] to avoid redundant queries in the skip list search algorithm by marking those elements whose key has already been checked by the search algorithm. We present here a precise analysis of the total search cost (expectation and variance), where the cost of the search is measured ..."
Cited by 13 (4 self)
It was suggested in [8] to avoid redundant queries in the skip list search algorithm by marking those elements whose key has already been checked by the search algorithm. We present here a precise analysis of the total search cost (expectation and variance), where the cost of the search is measured in terms of the number of key-to-key comparisons. These results are then compared with the corresponding values for the standard search algorithm.

1 Introduction

Skip lists have recently been introduced as a type of list-based data structure that may substitute for search trees [9]. A set of n elements is stored in a collection of sorted linear linked lists in the following manner: all elements are stored in increasing order in a linked list called level 1 and, recursively, each element which appears in the linked list of level i is included with independent probability q (0 < q < 1) in the linked list of level i + 1. The level of an element x is the number of linked lists it belongs to. For each elemen...
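The level-assignment rule described above amounts to drawing a geometrically distributed level per element, which can be sketched in a few lines (the function name and the level cap are ours):

```python
import random

def random_level(q=0.5, max_level=32):
    """Number of linked lists an element belongs to (>= 1), capped.

    Each element in level i is promoted to level i+1 independently with
    probability q, so the level is geometric with mean 1/(1 - q).
    """
    level = 1
    while random.random() < q and level < max_level:
        level += 1
    return level
```

With q = 1/2 the expected level is 2, so the total number of list nodes is about 2n in expectation, independent of the search analysis being studied here.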