Results 1–10 of 152
Skip Lists: A Probabilistic Alternative to Balanced Trees
, 1990
Abstract

Cited by 411 (1 self)
Skip lists are data structures that use probabilistic balancing rather than strictly enforced balancing. As a result, the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees.
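The probabilistic balancing described in the abstract can be sketched in a few dozen lines. The following is an illustrative Python sketch (not the paper's reference code): each inserted node draws a random level with p = 1/2, and search and insert walk the list top-down.

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)  # one successor pointer per level

class SkipList:
    """Minimal skip list: random node levels replace rebalancing."""
    MAX_LEVEL = 16

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)  # sentinel head spans all levels
        self.level = 0

    def _random_level(self):
        # coin flips: level k with probability 2^-(k+1)
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        # record, per level, the last node before the insertion point
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(key, lvl)
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def contains(self, key):
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key
```

Because levels are chosen by coin flips rather than rebalancing rules, insertion never restructures existing nodes, which is where the simplicity over balanced trees comes from.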
Certificate Revocation and Certificate Update
 In USENIX Security Symposium
, 1998
Abstract

Cited by 180 (1 self)
A new solution is suggested for the problem of certificate revocation. This solution represents Certificate Revocation Lists by an authenticated search data structure. The process of verifying whether a certificate is in the list or not, as well as updating the list, is made very efficient. The suggested solution gains in scalability, communication costs, robustness to parameter changes and update rate. Comparisons to the following solutions are included: 'traditional' CRLs (Certificate Revocation Lists), Micali's Certificate Revocation System (CRS) and Kocher's Certificate Revocation Trees (CRT).
Finally, a scenario is considered in which certificates are not revoked, but are frequently issued for short-term periods. Based on the authenticated search data structure scheme, a certificate update scheme is presented in which all certificates are updated by a common message.
The suggested solutions for the certificate revocation and certificate update problems are better than current solutions with respect to communication costs, update rate, and robustness to changes in parameters, and are compatible with, e.g., X.500 certificates.
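The abstract does not spell out the authenticated search data structure, but a common way to make a membership query verifiable is a Merkle hash tree over the sorted revocation list: the directory publishes a signed root, and a short path of sibling hashes proves whether a serial is in the list. The Python sketch below illustrates that general idea only; it is not the paper's exact construction, and `build_tree`, `proof`, and `verify` are hypothetical helper names.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build Merkle levels bottom-up; returns the levels, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur, nxt = levels[-1], []
        for i in range(0, len(cur), 2):
            left = cur[i]
            right = cur[i + 1] if i + 1 < len(cur) else cur[i]  # lone node pairs with itself
            nxt.append(h(left + right))
        levels.append(nxt)
    return levels

def proof(levels, index):
    """Sibling hashes from a leaf up to the root, with position flags."""
    path = []
    for level in levels[:-1]:
        sib = index ^ 1
        if sib >= len(level):
            sib = index  # lone node: sibling is itself
        path.append((level[sib], index % 2 == 1))  # flag: is our node the right child?
        index //= 2
    return path

def verify(leaf, path, root):
    """Recompute the root from a leaf and its authentication path."""
    node = leaf
    for sib, node_is_right in path:
        node = h(sib + node) if node_is_right else h(node + sib)
    return node == root
```

A verifier holding only the signed root can then check a proof of O(log n) hashes instead of downloading the whole revocation list, which is the kind of efficiency gain over traditional CRLs the abstract refers to.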
Denial of Service via Algorithmic Complexity Attacks
, 2003
Abstract

Cited by 142 (2 self)
We present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications' data structures. Frequently used data structures have "average-case" expected running time that's far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an attacker can effectively compute such input, and we demonstrate attacks against the hash table implementations in two versions of Perl, the Squid web proxy, and the Bro intrusion detection system. Using bandwidth less than a typical dial-up modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU. We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks.
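The degeneration the authors exploit is easy to reproduce against a toy chained hash table with a deterministic, invertible hash function; the sketch below is purely illustrative and has no relation to the attacked Perl, Squid, or Bro code.

```python
import itertools

class WeakHashTable:
    """Toy chained hash table with a predictable hash function."""
    def __init__(self, buckets=64):
        self.buckets = [[] for _ in range(buckets)]

    def _hash(self, key: str) -> int:
        # additive hash: trivially invertible, so collisions are easy to force
        return sum(map(ord, key)) % len(self.buckets)

    def insert(self, key, value):
        self.buckets[self._hash(key)].append((key, value))

    def lookup(self, key):
        # degrades to a linear scan when one bucket holds everything
        for k, v in self.buckets[self._hash(key)]:
            if k == key:
                return v
        return None

# Attacker's input: every permutation of a string has the same additive
# hash, so all of these keys land in a single bucket -> O(n) per lookup.
keys = ["".join(p) for p in itertools.permutations("abcde")]  # 120 colliding keys
t = WeakHashTable()
for k in keys:
    t.insert(k, True)
```

After this insertion sequence all 120 keys sit in one bucket, so the table behaves exactly like the linked list the abstract describes; a keyed universal hash (e.g. a per-process random seed) would scatter the same keys unpredictably.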
Sampling From a Moving Window Over Streaming Data
 In SODA
, 2002
Abstract

Cited by 123 (7 self)
We introduce the problem of sampling from a moving window of recent items from a data stream and develop the "chain-sample" and "priority-sample" algorithms for this problem.
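A minimal Python sketch of the chain-sample idea for maintaining one uniform sample from the window of the last N items: when an item is sampled, a successor index in the next N positions is pre-selected, and when the sampled item expires the chained successor takes over. The class name and details below are an illustrative reconstruction, not the paper's code.

```python
import random

class ChainSample:
    """One uniform sample from the last N stream items (chain-sample sketch)."""
    def __init__(self, N):
        self.N = N
        self.t = 0
        # chain entries: (arrival index, value, pre-chosen successor index)
        self.chain = []

    def observe(self, value):
        self.t += 1
        t, N = self.t, self.N
        if random.random() < 1.0 / min(t, N):
            # current item becomes the sample; start a fresh chain
            self.chain = [(t, value, random.randint(t + 1, t + N))]
        elif self.chain and self.chain[-1][2] == t:
            # the awaited successor arrived: extend the chain
            self.chain.append((t, value, random.randint(t + 1, t + N)))
        # drop the head once it falls out of the window
        while self.chain and self.chain[0][0] <= t - N:
            self.chain.pop(0)

    def sample(self):
        return self.chain[0][1] if self.chain else None
```

Because a successor is always chosen within the next N arrivals, a replacement is guaranteed to be on hand when the current sample expires, which is what lets the algorithm avoid storing the whole window.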
Topologically Sweeping Visibility Complexes via Pseudotriangulations
, 1996
Abstract

Cited by 85 (9 self)
This paper describes a new algorithm for constructing the set of free bitangents of a collection of n disjoint convex obstacles of constant complexity. The algorithm runs in time O(n log n + k), where k is the output size, and uses O(n) space. While earlier algorithms achieve the same optimal running time, this is the first optimal algorithm that uses only linear space. The visibility graph or the visibility complex can be computed in the same time and space. The only complicated data structure used by the algorithm is a splittable queue, which can be implemented easily using red-black trees. The algorithm is conceptually very simple, and should therefore be easy to implement and quite fast in practice. The algorithm relies on greedy pseudotriangulations, which are subgraphs of the visibility graph with many nice combinatorial properties. These properties, and thus the correctness of the algorithm, are partially derived from properties of a certain partial order on the faces of th...
A locality-preserving cache-oblivious dynamic dictionary
, 2002
Abstract

Cited by 71 (22 self)
This paper presents a simple dictionary structure designed for a hierarchical memory. The proposed data structure is cache oblivious and locality preserving. A cache-oblivious data structure has memory performance optimized for all levels of the memory hierarchy even though it has no memory-hierarchy-specific parameterization. A locality-preserving dictionary maintains elements of similar key values stored close together for fast access to ranges of data with consecutive keys. The data structure presented here is a simplification of the cache-oblivious B-tree of Bender, Demaine, and Farach-Colton. Like the cache-oblivious B-tree, this structure supports search operations using only O(log_B N) block operations at a level of the memory hierarchy with block size B. Insertion and deletion operations use O(log_B N + (log^2 N)/B) amortized block transfers. Finally, the data structure returns all k data items in a given search range using O(log_B N + k/B) block operations. This data structure was implemented and its performance was evaluated on a simulated memory hierarchy. This paper presents the results of this simulation for various combinations of block and memory sizes.
Proximity Problems on Moving Points
 In Proc. 13th Annu. ACM Sympos. Comput. Geom
, 1997
Abstract

Cited by 54 (16 self)
A kinetic data structure for the maintenance of a multidimensional range search tree is introduced. This structure is used as a building block to obtain kinetic data structures for two classical geometric proximity problems in arbitrary dimensions: the first structure maintains the closest pair of a set of continuously moving points, and is provably efficient. The second structure maintains a spanning tree of the moving points whose cost remains within some prescribed factor of the minimum spanning tree. The method for maintaining the closest pair of points can be extended to maintain the closest pair under other distance functions, which allows us to maintain the closest pair of a set of moving objects of similar sizes and of a set of points on a smooth manifold.
Estimating Rarity and Similarity over Data Stream Windows
 In Proceedings of 10th Annual European Symposium on Algorithms, volume 2461 of Lecture Notes in Computer Science
, 2002
Abstract

Cited by 51 (8 self)
In the windowed data stream model, we observe items coming in over time. At any time t, we consider the window of the last N observations a_{t-(N-1)}, a_{t-(N-2)}, ..., a_t, each a_i ∈ {1, ..., u}; we are allowed to ask queries about the data in the window, say, we wish to compute the minimum or the median of the items in the window. A crucial restriction is that we are only allowed o(N) (often polylogarithmic in N) storage space, that is, space smaller than the window size, so the items within the window cannot be archived. The windowed data stream model arose out of the need to formally reason about the underlying data analysis problems in applications like internetworking and transaction processing.
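As a concrete baseline for one of the queries mentioned, the classic monotone-deque algorithm computes the exact minimum of every window; note that it may hold up to N items, i.e. the O(N) space cost that the paper's o(N)-space methods avoid by approximating. This sketch is illustrative and is not from the paper.

```python
from collections import deque

def window_minimums(stream, N):
    """Exact minimum of each length-N window, via a monotone deque.

    The deque stores (index, value) pairs with strictly increasing values;
    its head is always the minimum of the current window. Worst-case space
    is O(N), unlike the o(N)-space windowed-stream algorithms."""
    dq = deque()
    out = []
    for t, x in enumerate(stream):
        # drop values that can never be a future minimum
        while dq and dq[-1][1] >= x:
            dq.pop()
        dq.append((t, x))
        # drop the head if it has slid out of the window
        if dq[0][0] <= t - N:
            dq.popleft()
        if t >= N - 1:
            out.append(dq[0][1])
    return out

# e.g. window_minimums([4, 2, 7, 1, 5, 3], 3) -> [2, 1, 1, 1]
```

Each item enters and leaves the deque at most once, so the total work is O(1) amortized per arrival; the space bound, not the time bound, is what the windowed-stream restriction rules out.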
Self-Adjusting Computation
 In ACM SIGPLAN Workshop on ML
, 2005
Abstract

Cited by 49 (19 self)
From the algorithmic perspective, we describe novel data structures for tracking the dependences in a computation and a change-propagation algorithm for adjusting computations to changes. We show that the overhead of our dependence-tracking techniques is O(1). To determine the effectiveness of change propagation, we present an analysis technique, called trace stability, and apply it to a number of applications.
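A toy illustration of dependence tracking and change propagation (this is not the authors' ML implementation; `Mod` and `Comp` are hypothetical names): modifiable references record which computations read them, and a write reruns only those readers.

```python
class Mod:
    """A modifiable reference that records its readers."""
    def __init__(self, value):
        self.value = value
        self.readers = set()

    def write(self, value):
        if value != self.value:
            self.value = value
            # change propagation: rerun only the computations that read us
            for comp in list(self.readers):
                comp.run()

class Comp:
    """A tracked computation; registering a read is O(1)."""
    def __init__(self, fn):
        self.fn = fn

    def read(self, mod):
        mod.readers.add(self)  # constant-time dependence tracking
        return mod.value

    def run(self):
        self.fn(self)

# usage: total = a + b, recomputed only when a or b changes
a, b = Mod(1), Mod(2)
total = Mod(None)

def add(comp):
    total.write(comp.read(a) + comp.read(b))

c = Comp(add)
c.run()      # total.value == 3
a.write(10)  # propagation reruns `add` automatically
```

After `a.write(10)`, `total.value` is 12 without anything outside the changed dependence being re-executed, which is the essence of the change-propagation idea; the paper's contribution is making this tracking constant-overhead and analyzing when propagation is fast via trace stability.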