Results 11 – 20 of 22
Error Correcting Codes, Perfect Hashing Circuits, and Deterministic Dynamic Dictionaries
, 1997
Abstract

Cited by 17 (2 self)
We consider dictionaries of size n over a finite universe U and introduce a new technique for their implementation: error-correcting codes. The use of such codes makes it possible to replace strong forms of hashing, such as universal hashing, with much weaker forms, such as clustering. We use...
Transdichotomous Results in Computational Geometry, I: Point Location in Sublogarithmic Time
, 2008
Abstract

Cited by 10 (3 self)
Given a planar subdivision whose coordinates are integers bounded by U ≤ 2^w, we present a linear-space data structure that can answer point location queries in O(min{lg n / lg lg n, √(lg U / lg lg U)}) time on the unit-cost RAM with word size w. This is the first result to beat the standard Θ(lg n) bound for infinite-precision models. As a consequence, we obtain the first o(n lg n) (randomized) algorithms for many fundamental problems in computational geometry for arbitrary integer input on the word RAM, including: constructing the convex hull of a three-dimensional point set, computing the Voronoi diagram or the Euclidean minimum spanning tree of a planar point set, triangulating a polygon with holes, and finding intersections among a set of line segments. Higher-dimensional extensions and applications are also discussed. Though computational geometry with bounded-precision input has been investigated for a long time, improvements have been limited largely to problems of an orthogonal flavor. Our results surpass this long-standing limitation, answering, for example, a question of Willard (SODA ’92).
A Trade-Off for Worst-Case Efficient Dictionaries
Abstract

Cited by 7 (2 self)
We consider dynamic dictionaries over the universe U = {0, 1}^w on a unit-cost RAM with word size w and a standard instruction set, and present a linear-space deterministic dictionary accommodating membership queries in time (log log n)^O(1) and updates in time (log n)^O(1), where n is the size of the set stored. Previous solutions either had query time (log n)^Ω(1) or update time 2^ω(√log n) in the worst case.
Persistent Predecessor Search and Orthogonal Point Location on the Word RAM
Abstract

Cited by 6 (3 self)
We answer a basic data structuring question (for example, raised by Dietz and Raman back in SODA 1991): can van Emde Boas trees be made persistent, without changing their asymptotic query/update time? We present a (partially) persistent data structure that supports predecessor search in a set of integers in {1,..., U} under an arbitrary sequence of n insertions and deletions, with O(log log U) expected query time and expected amortized update time, and O(n) space. The query bound is optimal in U for linear-space structures and improves previous near-O((log log U)^2) methods. The same method solves a fundamental problem from computational geometry: point location in orthogonal planar subdivisions (where edges are vertical or horizontal). We obtain the first static data structure achieving O(log log U) worst-case query time and linear space. This result is again optimal in U for linear-space structures and improves the previous O((log log U)^2) method by de Berg, Snoeyink, and van Kreveld (1992). The same result also holds for higher-dimensional subdivisions that are orthogonal binary space partitions, and for certain non-orthogonal planar subdivisions such as triangulations without small angles. Many geometric applications follow, including improved query times for orthogonal range reporting for dimensions ≥ 3 on the RAM. Our key technique is an interesting new van Emde Boas–style recursion that alternates between two strategies, both quite simple.
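The van Emde Boas–style recursion named as the key technique can be seen in miniature in the classic (non-persistent) structure it builds on. The sketch below is our own illustrative Python rendering, not the paper's data structure: each key is split into a high and a low half, so every recursive step shrinks the universe from size U to roughly √U, giving O(log log U) predecessor queries.

```python
class VEB:
    """Recursive van Emde Boas structure over keys in [0, 2**bits).

    The minimum is kept "lazy" (never stored in a cluster), which is what
    keeps insertions to a single recursive call.
    """
    def __init__(self, bits):
        self.bits = bits
        self.min = None
        self.max = None
        if bits > 1:
            self.half = bits // 2      # width of the low half
            self.clusters = {}         # high half -> VEB over low halves
            self.summary = None        # VEB over non-empty cluster indices

    def insert(self, x):
        if self.min is None:
            self.min = self.max = x    # lazy minimum, not stored below
            return
        if x < self.min:
            x, self.min = self.min, x  # new minimum; push old one down
        if x > self.max:
            self.max = x
        if self.bits > 1:
            hi, lo = x >> self.half, x & ((1 << self.half) - 1)
            if hi not in self.clusters:
                self.clusters[hi] = VEB(self.half)
                if self.summary is None:
                    self.summary = VEB(self.bits - self.half)
                self.summary.insert(hi)
            self.clusters[hi].insert(lo)

    def predecessor(self, x):
        """Largest stored key strictly below x, or None."""
        if self.min is None or x <= self.min:
            return None
        if x > self.max:
            return self.max
        if self.bits == 1:
            return self.min
        hi, lo = x >> self.half, x & ((1 << self.half) - 1)
        cl = self.clusters.get(hi)
        if cl is not None and cl.min is not None and lo > cl.min:
            return (hi << self.half) | cl.predecessor(lo)
        prev = self.summary.predecessor(hi) if self.summary else None
        if prev is None:
            return self.min            # fall back to the lazy minimum
        return (prev << self.half) | self.clusters[prev].max

v = VEB(5)                             # universe {0, ..., 31}
for k in [3, 14, 15, 9, 26, 5]:
    v.insert(k)
print(v.predecessor(10))               # 9
print(v.predecessor(3))                # None (3 is the minimum)
```

Each `predecessor` call recurses into exactly one of the cluster or the summary, so the universe width halves per step: log log U levels in total.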
Dynamic 3-sided Planar Range Queries with Expected Doubly Logarithmic Time
 Proceedings of ISAAC, 2009
Abstract

Cited by 2 (1 self)
We consider the problem of maintaining dynamically a set of points in the plane and supporting range queries of the type [a, b] × (−∞, c]. We assume that the inserted points have their x-coordinates drawn from a class of smooth distributions, whereas the y-coordinates are arbitrarily distributed. The points to be deleted are selected uniformly at random among the inserted points. For the RAM model, we present a linear-space data structure that supports queries in O(log log n + t) expected time with high probability and updates in O(log log n) expected amortized time, where n is the number of points stored and t is the size of the output of the query. For the I/O model we support queries in O(log log_B n + t/B) expected I/Os with high probability and updates in O(log_B log n) expected amortized I/Os using linear space, where B is the disk block size. The data structures are deterministic and the expectation is with respect to the input distribution.
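For concreteness, a 3-sided query [a, b] × (−∞, c] asks for all points whose x-coordinate lies in [a, b] and whose y-coordinate is at most c. The naive linear scan below (our own illustration; the paper's structures answer the same query in expected doubly logarithmic time) pins down the semantics:

```python
def three_sided(points, a, b, c):
    """Report all points (x, y) with a <= x <= b and y <= c."""
    return sorted(p for p in points if a <= p[0] <= b and p[1] <= c)

pts = [(1, 5), (2, 1), (3, 7), (4, 2), (8, 0)]
print(three_sided(pts, 2, 6, 3))   # [(2, 1), (4, 2)]
```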
Efficient IP table lookup via adaptive stratified trees with selective reconstructions
 12th European Symposium on Algorithms
Abstract

Cited by 1 (0 self)
IP address lookup is a critical operation for high-bandwidth routers in packet-switching networks such as the Internet. The lookup is a non-trivial operation since it requires searching for the longest prefix, among those stored in a (large) given table, matching the IP address. Ever-increasing routing table sizes, traffic volumes and link speeds demand new and more efficient algorithms. Moreover, the imminent move to IPv6 128-bit addresses will soon require a rethinking of previous technical choices. This article describes a new data structure for solving the IP table lookup problem, christened the Adaptive Stratified Tree (AST). The proposed solution is based on casting the problem in geometric terms and on repeated application of efficient local geometric optimization routines. Experiments with this approach have shown that in terms of storage, query time and update time the AST is on a par with state-of-the-art algorithms based on data compression or string manipulations (and it is often better on some of the measured quantities).
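The lookup problem itself, stripped of all engineering, is longest-prefix matching. The binary trie below is a baseline sketch for illustration only (our own naming; it is not the Adaptive Stratified Tree): it walks the address bits, remembering the last stored prefix seen, so the deepest match wins.

```python
class PrefixTrie:
    """Binary trie mapping bit-string prefixes to next hops."""
    def __init__(self):
        self.root = {}                 # node: {'0': child, '1': child, 'hop': hop}

    def insert(self, prefix_bits, next_hop):
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node['hop'] = next_hop

    def lookup(self, addr_bits):
        """Next hop of the longest stored prefix matching addr_bits."""
        node, best = self.root, None
        for b in addr_bits:
            if 'hop' in node:          # remember the deepest match so far
                best = node['hop']
            node = node.get(b)
            if node is None:
                return best
        return node.get('hop', best)

t = PrefixTrie()
t.insert('10', 'A')                    # 10*   -> A
t.insert('1011', 'B')                  # 1011* -> B
t.insert('', 'default')                # catch-all route
print(t.lookup('10110111'))            # 'B' (longest match wins over 'A')
print(t.lookup('10010000'))            # 'A'
print(t.lookup('01010101'))            # 'default'
```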
Fast String Sorting Using Order-Preserving Compression
Abstract
We give experimental evidence for the benefits of order-preserving compression in sorting algorithms. While, in general, any algorithm might benefit from compressed data because of reduced paging requirements, we identified two natural candidates that would further benefit from order-preserving compression, namely string-oriented sorting algorithms and word-RAM algorithms for keys of bounded length. The word-RAM model has some of the fastest known sorting algorithms in practice. These algorithms are designed for keys of bounded length, usually 32 or 64 bits, which limits their direct applicability to strings. One possibility is to use an order-preserving compression scheme, so that a bounded-key-length algorithm can be applied. For the case of standard algorithms, we took what is considered to be among the fastest non-word-RAM string sorting algorithms, Fast MKQSort, and measured its performance on compressed data. The Fast MKQSort algorithm of Bentley and Sedgewick is optimized to handle text strings. Our experiments show that order-preserving compression results in savings of approximately 15% over the same algorithm on non-compressed data. For the word-RAM, we modified Andersson’s sorting algorithm to handle variable-length keys. The resulting algorithm is faster than the standard Unix sort by a factor of 1.5. Last, we used an order-preserving scheme that is within a constant additive term...
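A toy version of the underlying idea, under our own simplifying assumptions (fixed-width rank codes rather than the variable-length order-preserving codes the paper uses): because the per-character codes preserve alphabetical order, packed integer keys compare in the same order as the original strings, so a bounded-key-length sort can operate on the packed prefixes.

```python
def build_code(strings):
    """Order-preserving fixed-width codes by character rank; 0 is reserved
    as padding so a shorter string sorts before its extensions."""
    alphabet = sorted(set(c for s in strings for c in s))
    width = max(1, len(alphabet).bit_length())   # bits per codeword
    return {c: i + 1 for i, c in enumerate(alphabet)}, width

def pack(s, code, width, word_bits=64):
    """Pack as many leading characters of s as fit into one word-sized key."""
    key, used = 0, 0
    for c in s:
        if used + width > word_bits:
            break
        key = (key << width) | code[c]
        used += width
    return key << (word_bits - used)             # left-justify, pad with 0

strings = ["banana", "apple", "band", "ape"]
code, width = build_code(strings)
# Sort by packed prefix; the raw string breaks ties beyond one word.
ordered = sorted(strings, key=lambda s: (pack(s, code, width), s))
print(ordered)   # ['ape', 'apple', 'banana', 'band']
```

With 7 distinct characters the code needs only 3 bits per character instead of 8, so a 64-bit key covers a 21-character prefix: this is the sense in which compression lets a bounded-key-length algorithm reach further into each string.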
Direct routing on trees (Extended Abstract)
 In Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA ’98)
, 1998
Abstract
We consider offline permutation routing on trees. We are particularly interested in direct tree routing schedules, where packets, once started, move directly towards their destination. The scheduling of start times ensures that no two packets use the same edge in the same direction in the same time step. In O(n log n log log n) time and O(n log n) space, we construct a direct tree routing schedule guaranteed to complete the routing within the general optimum of n − 1 steps. In addition, our scheme guarantees that at most two packets arrive at the same node in the same time step. Furthermore, if the length of the route of a given packet is d and the maximum number of other routes intersecting the route in a single node is k, then the packet arrives at its destination within d + k steps.
1 Introduction
In this paper, we consider offline hot-potato permutation packet routing on trees. We are given a permutation π of the nodes, and for each node v, we want to send a packet fr...
Arne Andersson
Abstract
We show that a unit-cost RAM with a word length of w bits can sort n integers in the range 0..2^w − 1 in O(n log log n) time, for arbitrary w ≥ log n, a significant improvement over the bound of O(n √log n) achieved by the fusion trees of Fredman and Willard. Provided that w ≥ (log n)^(2+ε) for some fixed ε > 0, the sorting can even be accomplished in linear expected time with a randomized algorithm. Both of our algorithms parallelize without loss on a unit-cost PRAM with a word length of w bits. The first one yields an algorithm that uses O(log n) time and O(n log log n) operations on a deterministic CRCW PRAM. The second one yields an algorithm that uses O(log n) expected time and O(n) expected operations on a randomized EREW PRAM, provided that w ≥ (log n)^(2+ε) for some fixed ε > 0. Our deterministic and randomized sequential and parallel algorithms generalize to the lexicographic sorting problem of sorting multiple-precision integers represented in several words...
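The word-level parallelism behind bounds like these can be shown in miniature (a textbook "test bit" trick in our own illustrative code, not the paper's algorithm): packing several short keys into one word lets a single subtraction answer one comparison per field simultaneously.

```python
FIELD = 8                     # 7 data bits + 1 test bit per field
K = 4                         # number of keys packed per (32-bit) word
# One test bit at the top of each field: 0x80808080 for FIELD=8, K=4.
TEST = sum(1 << (i * FIELD + FIELD - 1) for i in range(K))

def pack(keys):
    """Pack K keys, each < 2**(FIELD-1), into one integer word."""
    word = 0
    for i, k in enumerate(keys):
        assert 0 <= k < (1 << (FIELD - 1))
        word |= k << (i * FIELD)
    return word

def parallel_ge(xs, ys):
    """Decide xs[i] >= ys[i] for every i with a single subtraction.

    Setting the test bit of every x-field guarantees each per-field result
    stays positive, so no borrow crosses a field boundary; afterwards the
    test bit survives exactly in the fields where x_i >= y_i.
    """
    diff = (pack(xs) | TEST) - pack(ys)
    return [(diff >> (i * FIELD + FIELD - 1)) & 1 == 1 for i in range(K)]

print(parallel_ge([5, 3, 100, 0], [5, 4, 99, 1]))   # [True, False, True, False]
```

With w-bit words and b-bit keys, one instruction performs w/b comparisons at once; cascading this kind of operation is what drives sorting below the n log n comparison bound on the word RAM.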
Implementation and Performance Analysis of Exponential Tree Sorting
Abstract
Traditional comparison-based sorting algorithms give an O(n log n) bound, with or without randomization. Recent research has optimized the lower bound for deterministic algorithms for integer sorting [13]. Andersson has given the idea of the exponential tree, which can be used for sorting [4]. Andersson, Hagerup, Nilsson and Raman have given an algorithm which sorts n integers in expected time but uses more than linear space [4, 5]. Andersson has given an improved algorithm which sorts integers in expected time and linear space but uses randomization [2, 4]. Yijie Han has improved this further to sort integers in expected time and linear space, but it processes the integers in a batch, i.e. all integers at a time [6]. These algorithms are very complex to implement. In this paper we discuss a way to implement exponential tree sorting and compare the results with a traditional sorting technique.