Results 1–10 of 28
SkipNet: A scalable overlay network with practical locality properties
, 2003
"... Abstract: Scalable overlay networks such as Chord, Pastry, and Tapestry have recently emerged as a flexible infrastructure for building large peertopeer systems. In practice, two disadvantages of such systems are that it is difficult to control where data is stored and difficult to guarantee that ..."
Abstract

Cited by 294 (5 self)
 Add to MetaCart
Abstract: Scalable overlay networks such as Chord, Pastry, and Tapestry have recently emerged as a flexible infrastructure for building large peer-to-peer systems. In practice, two disadvantages of such systems are that it is difficult to control where data is stored and difficult to guarantee that routing paths remain within an administrative domain. SkipNet is a scalable overlay network that provides controlled data placement and routing locality guarantees by organizing data primarily by lexicographic key ordering. SkipNet also allows for both fine-grained and coarse-grained control over data placement: content can be placed either on a predetermined node or distributed uniformly across the nodes of a hierarchical naming subtree. An additional useful consequence of SkipNet’s locality properties is that partition failures, in which an entire organization disconnects from the rest of the system, result in two disjoint, but well-connected, overlay networks.
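The lexicographic organization above is what lets SkipNet constrain placement. As a rough sketch (not SkipNet's actual routing tables; the node names and the wrap-around rule here are hypothetical), placing each key on the first node whose name follows it in sorted order keeps organization-prefixed keys on that organization's nodes:

```python
import bisect

# Hypothetical illustration of SkipNet-style constrained placement:
# nodes are ordered lexicographically by name, so content whose key
# shares an organization's name prefix lands on that organization's nodes.
nodes = sorted([
    "com.microsoft/node1", "com.microsoft/node2",
    "edu.mit/node1", "edu.mit/node2",
])

def place(key):
    """Map a key to the first node whose name is >= key (wrapping around)."""
    i = bisect.bisect_left(nodes, key)
    return nodes[i % len(nodes)]

# A key prefixed with "edu.mit/" lands on an edu.mit node, as long as
# that organization still has a node at or past the key in sorted order.
print(place("edu.mit/dataset-A"))   # edu.mit/node1
```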
Understanding tradeoffs in software transactional memory
 In Proceedings of the International Symposium on Code Generation and Optimization
, 2007
"... There has been a flurry of recent work on the design of high performance software and hybrid hardware/software transactional memories (STMs and HyTMs). This paper reexamines the design decisions behind several of these stateoftheart algorithms, adopting some ideas, rejecting others, all in an atte ..."
Abstract

Cited by 25 (1 self)
 Add to MetaCart
There has been a flurry of recent work on the design of high-performance software and hybrid hardware/software transactional memories (STMs and HyTMs). This paper reexamines the design decisions behind several of these state-of-the-art algorithms, adopting some ideas and rejecting others, all in an attempt to make STMs faster. We created the transactional locking (TL) framework of STM algorithms and used it to conduct a range of comparisons of the performance of non-blocking, lock-based, and hybrid STM algorithms versus fine-grained hand-crafted ones. We were able to make several illuminating observations regarding lock acquisition order, the interaction of STMs with memory management schemes, and the role of overheads and abort rates in STM performance.
The Interval Skip List: A Data Structure for Finding All Intervals That Overlap a Point
 In Proc. of the 2nd Workshop on Algorithms and Data Structures
, 1992
"... A problem that arises in computational geometry, pattern matching, and other applications is the need to quickly determine which of a collection of intervals overlap a point. Requests of this type are called stabbing queries. A recently discovered randomized data structure called the skip list can ..."
Abstract

Cited by 22 (3 self)
 Add to MetaCart
A problem that arises in computational geometry, pattern matching, and other applications is the need to quickly determine which of a collection of intervals overlap a point. Requests of this type are called stabbing queries. A recently discovered randomized data structure called the skip list can maintain ordered sets efficiently, just as balanced binary search trees can, but is much simpler to implement than balanced trees. This paper introduces an extension of the skip list called the interval skip list, or IS-list, to support interval indexing. The IS-list allows stabbing queries and dynamic insertion and deletion of intervals. A stabbing query using an IS-list containing n intervals takes an expected time of O(log n). Inserting or deleting an interval in an IS-list takes an expected time of O(log² n) if the interval endpoints are chosen from a continuous distribution. Moreover, the IS-list inherits much of the simplicity of the skip list: it can be implemented in a relativ...
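For concreteness, this is the query the IS-list accelerates, shown here with a deliberately naive O(n) scan; the IS-list answers the same query in O(log n) expected time:

```python
# A brute-force stabbing query, shown only to pin down the operation the
# IS-list accelerates: find all intervals [lo, hi] containing point q.

def stab(intervals, q):
    """Return all (lo, hi) intervals with lo <= q <= hi. O(n) per query."""
    return [(lo, hi) for (lo, hi) in intervals if lo <= q <= hi]

intervals = [(1, 5), (3, 9), (6, 8), (10, 12)]
print(stab(intervals, 4))   # [(1, 5), (3, 9)]
print(stab(intervals, 7))   # [(3, 9), (6, 8)]
```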
Selection Predicate Indexing for Active Databases Using Interval Skip Lists
 Information Systems
, 1996
"... A new, efficient selection predicate indexing scheme for active database systems is introduced. The selection predicate index proposed uses an interval index on an attribute of a relation or object collection when one or more rule condition clauses are defined on that attribute. The selection pre ..."
Abstract

Cited by 22 (4 self)
 Add to MetaCart
A new, efficient selection predicate indexing scheme for active database systems is introduced. The selection predicate index proposed uses an interval index on an attribute of a relation or object collection when one or more rule condition clauses are defined on that attribute. The selection predicate index uses a new type of interval index called the interval skip list (IS-list). The IS-list is designed to allow efficient retrieval of all intervals that overlap a point, while allowing dynamic insertion and deletion of intervals. IS-list algorithms are described in detail. The IS-list allows efficient online searches, insertions, and deletions, yet is much simpler to implement than other comparable interval index data structures such as the priority search tree and the balanced interval binary search tree (IBS-tree). IS-lists require only one-third as much code to implement as balanced IBS-trees. The combination of simplicity, performance, and dynamic updateability of the IS-li...
Computational bounds on hierarchical data processing with applications to information security
 In Proc. Int. Colloquium on Automata, Languages and Programming (ICALP), volume 3580 of LNCS
, 2005
"... Motivated by the study of algorithmic problems in the domain of information security, in this paper, we study the complexity of a new class of computations over a collection of values associated with a set of n elements. We introduce hierarchical data processing (HDP) problems which involve the comp ..."
Abstract

Cited by 18 (11 self)
 Add to MetaCart
Motivated by the study of algorithmic problems in the domain of information security, in this paper we study the complexity of a new class of computations over a collection of values associated with a set of n elements. We introduce hierarchical data processing (HDP) problems, which involve the computation of a collection of output values from an input set of n elements, where the entire computation is fully described by a directed acyclic graph (DAG). That is, individual computations are performed and intermediate values are processed according to the hierarchy induced by the DAG. We present an Ω(log n) lower bound on various computational cost measures for HDP problems. Essential in our study is an analogy that we draw between the complexity of any HDP problem of size n and searching by comparison in an ordered set of n elements, which shows an interesting connection between the two problems. In view of the logarithmic lower bounds, we also develop a new randomized DAG scheme for HDP problems that provides close-to-optimal performance, achieving cost measures whose constant factors on the (logarithmic) leading asymptotic term are close to optimal. Our lower bounds are general and apply to all HDP problems; along with our new DAG construction, they provide a theoretical framework that is interesting in itself, as well as useful in algorithm analysis. We apply our results to two information security problems, data authentication through cryptographic hashing and multicast key distribution using key graphs, and obtain a unified analysis and treatment for these problems. We show that both problems involve HDP and prove logarithmic lower bounds on their computational and communication costs. In particular, using our new DAG scheme, we present a new efficient authenticated dictionary with improved authentication overhead over previously known schemes.
Moreover, through the relation between HDP and searching by comparison, we present a new skip-list version where the expected number of comparisons in a search is 1.25 log₂ n + O(1).
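The data-authentication application hashes values up a DAG; a Merkle tree is the simplest such scheme. A minimal sketch (SHA-256 and the duplicate-last-node rule are arbitrary choices here; the paper's optimized DAG construction differs):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash a list of byte strings up a binary tree. The root commits to
    all leaves, so verifying one leaf needs only the O(log n) sibling
    hashes on its root path -- a hierarchical (DAG) computation of the
    kind analyzed above."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"a", b"b", b"c", b"d"])
print(root.hex())
```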
Analysis of an Optimized Search Algorithm for Skip Lists
 Theoretical Computer Science
, 1994
"... It was suggested in [8] to avoid redundant queries in the skip list search algorithm by marking those elements whose key has already been checked by the search algorithm. We present here a precise analysis of the total search cost (expectation and variance), where the cost of the search is measured ..."
Abstract

Cited by 11 (3 self)
 Add to MetaCart
It was suggested in [8] to avoid redundant queries in the skip list search algorithm by marking those elements whose key has already been checked by the search algorithm. We present here a precise analysis of the total search cost (expectation and variance), where the cost of the search is measured in terms of the number of key-to-key comparisons. These results are then compared with the corresponding values of the standard search algorithm.

1 Introduction

Skip lists have recently been introduced as a type of list-based data structure that may substitute for search trees [9]. A set of n elements is stored in a collection of sorted linear linked lists in the following manner: all elements are stored in increasing order in a linked list called level 1 and, recursively, each element which appears in the linked list of level i is included with independent probability q (0 < q < 1) in the linked list of level i + 1. The level of an element x is the number of linked lists it belongs to. For each elemen...
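The level construction described above is easy to sketch: a node's level is one plus a geometric number of successful coin flips with bias q (the cap at `max_level` is an implementation convenience, not part of the analysis):

```python
import random

def random_level(q=0.5, max_level=16):
    """Every element appears in level 1; each higher level is reached
    with independent probability q, exactly as described above."""
    lvl = 1
    while random.random() < q and lvl < max_level:
        lvl += 1
    return lvl

# Expected level is 1/(1-q); with q = 0.5, about 2 on average.
random.seed(0)
levels = [random_level() for _ in range(10000)]
print(sum(levels) / len(levels))   # close to 2.0
```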
Algorithm Design and Software Libraries: Recent Developments in the LEDA Project
 In Proc. IFIP 12th World Computer Congress
, 1992
"... LEDA (Library of Efficient Data Types and Algorithms) is an ongoing project which aims to build a library of the efficient data structures and algorithms used in combinatorial computing [12]. We discuss three recent aspects of the project: The cost of flexibility, implementation parameters, and a ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
LEDA (Library of Efficient Data Types and Algorithms) is an ongoing project which aims to build a library of the efficient data structures and algorithms used in combinatorial computing [12]. We discuss three recent aspects of the project: the cost of flexibility, implementation parameters, and augmented trees.
Space-efficient finger search on degree-balanced search trees
 In SODA
, 2003
"... We show how to support the finger search operation on degreebalanced search trees in a spaceefficient manner that retains a worstcase time bound of O(log d), where d is the difference in rank between successive search targets. While most existing treebased designs allocate linear extra storage i ..."
Abstract

Cited by 10 (1 self)
 Add to MetaCart
We show how to support the finger search operation on degree-balanced search trees in a space-efficient manner that retains a worst-case time bound of O(log d), where d is the difference in rank between successive search targets. While most existing tree-based designs allocate linear extra storage in the nodes (e.g., for side links and parent pointers), our design maintains a compact auxiliary data structure called the “hand” during the lifetime of the tree and imposes no other storage requirement within the tree. The hand requires O(log n) space for an n-node tree and has a relatively simple structure. It can be updated synchronously during insertions and deletions with time proportional to the number of structural changes in the tree. The auxiliary nature of the hand also makes it possible to introduce finger searches into any existing implementation without modifying the underlying data representation (e.g., any implementation of Red-Black trees can be used). Together these factors make finger searches more appealing in practice. Our design also yields a simple yet optimal in-order walk algorithm with worst-case O(1) work per increment (again without any extra storage requirement in the nodes), and we believe our algorithm can be used in database applications when the overall performance is very sensitive to retrieval latency.
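The O(log d) cost model is independent of the tree representation; on a sorted array it can be sketched with exponential stepping from the finger followed by binary search. This illustrates the bound only, not the paper's hand structure:

```python
import bisect

def finger_search(a, finger, target):
    """Locate target in sorted list a starting from index `finger`,
    using exponential steps then binary search: O(log d) comparisons,
    where d is the distance from finger to target's position."""
    if target >= a[finger]:
        step, hi = 1, finger
        while hi < len(a) and a[hi] < target:   # gallop right
            hi = min(finger + step, len(a))
            step *= 2
        return bisect.bisect_left(a, target, finger, hi)
    step, lo = 1, finger
    while lo > 0 and a[lo] > target:            # gallop left
        lo = max(finger - step, 0)
        step *= 2
    return bisect.bisect_left(a, target, lo, finger)

a = list(range(0, 1000, 2))            # 0, 2, 4, ..., 998
i = finger_search(a, finger=100, target=220)
print(i, a[i])                          # 110 220
```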
Compressed perfect embedded skip lists for quick inverted-index lookups
 In Proc. SPIRE 2005, Lecture Notes in Computer Science
, 2005
"... Large inverted indices are by now common in the construction of webscale search engines. For faster access, inverted indices are indexed internally so that it is possible to skip quickly over unnecessary documents. The classical approach to skipping dictates that a skip should be positioned every √ ..."
Abstract

Cited by 9 (2 self)
 Add to MetaCart
Large inverted indices are by now common in the construction of web-scale search engines. For faster access, inverted indices are indexed internally so that it is possible to skip quickly over unnecessary documents. The classical approach to skipping dictates that a skip should be positioned every √f document pointers, where f is the overall number of documents in which the term appears. We argue that due to the growing size of the web more refined techniques are necessary, and describe how to embed a compressed perfect skip list in an inverted list. We provide statistical models that explain the empirical distribution of the skip data we observe in our experiments, and use them to devise good compression techniques that limit the waste in space, so that the resulting data structure increases the overall index size by just a few percent, while still making it possible to index pointers with a rather fine granularity.
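The classical √f rule is easy to sketch. A toy illustration with uncompressed, single-level skips (unlike the embedded compressed skip lists of the paper):

```python
import math

def build_skips(postings):
    """Place a skip every ~sqrt(f) entries, the classical rule cited
    above. Each skip is (index, docid) for the entry sqrt(f) ahead."""
    f = len(postings)
    step = max(1, int(math.sqrt(f)))
    return [(i + step, postings[i + step])
            for i in range(0, f - step, step)]

def member(postings, skips, doc):
    """Check membership, using skips to jump over runs of the list."""
    start = 0
    for idx, d in skips:
        if d <= doc:
            start = idx          # safe to jump: skipped docids are <= doc
        else:
            break
    for d in postings[start:]:   # linear scan within one skip span
        if d == doc:
            return True
        if d > doc:
            return False
    return False

postings = list(range(0, 100, 3))      # docids 0, 3, 6, ..., 99
skips = build_skips(postings)
print(member(postings, skips, 42))     # True
print(member(postings, skips, 43))     # False
```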
Worst-case optimal union-intersection expression evaluation
 In Proceedings of the 32nd International Colloquium on Automata, Languages and Programming (ICALP ’05), volume 3580 of Lecture Notes in Computer Science
, 2005
"... addresses: ..."