Results 1 - 8 of 8
On RAM priority queues, 1996
Abstract

Cited by 70 (9 self)
Priority queues are some of the most fundamental data structures. They are used directly for, say, task scheduling in operating systems. Moreover, they are essential to greedy algorithms. We study the complexity of priority queue operations on a RAM with arbitrary word size. We present exponential improvements over previous bounds, and we show tight relations to sorting. Our first result is a RAM priority queue supporting insert and extract-min operations in worst-case time O(log log n), where n is the current number of keys in the queue. This is an exponential improvement over the O(√log n) bound of Fredman and Willard from STOC'90. Our algorithm is simple, and it only uses AC⁰ operations, meaning that there is no hidden time dependency on the word size. Plugging this priority queue into Dijkstra's algorithm gives an O(m log log m) algorithm for the single source shortest path problem on a graph with m edges, as compared with the previous O(m √log m) bound based on Fredman...
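To make the "plugging in" concrete, here is a minimal sketch of Dijkstra's algorithm where the priority queue is the only pluggable component. It uses Python's `heapq` (a binary heap, so O(m log n) overall), not the paper's O(log log n) structure; swapping in that structure is what yields the O(m log log m) bound. Function and variable names are illustrative, not from the paper.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; adj maps node -> list of (neighbor, weight).

    The priority queue here is heapq (a binary heap). Replacing it with a
    priority queue supporting insert/extract-min in O(log log n) gives the
    O(m log log m) bound quoted above.
    """
    dist = {source: 0}
    pq = [(0, source)]                 # (distance, node); heappop is extract-min
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                   # stale entry, skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

The running time is n inserts plus m decrease-key/insert operations, so the queue's per-operation cost dominates on dense graphs.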
Error Correcting Codes, Perfect Hashing Circuits, and Deterministic Dynamic Dictionaries, 1997
Abstract

Cited by 17 (2 self)
We consider dictionaries of size n over a finite universe U and introduce a new technique for their implementation: error correcting codes. Such codes make it possible to replace strong forms of hashing, such as universal hashing, with much weaker forms, such as clustering. We use
Computation of the Least Significant Set Bit
In Proceedings of the 2nd Electrotechnical and Computer Science Conference, Portoroz, 1993
Abstract

Cited by 15 (5 self)
We investigate the problem of computing the least significant set bit in a word. We describe a constant-time algorithm for the problem assuming a cell probe model of computation. 1 Introduction and Definitions The problem of computing the index of the least significant set bit in a word arises in various aspects of data organization, such as bit representations of ordered sets. In this paper we describe a novel algorithm to compute the index. Our algorithm runs on a cell probe model of computation (cf. [2, 4, 5]), which is a generalization of the random access machine model. We assume that the memory registers are of bounded size. The bits in the word (register) are enumerated from 0, the least significant bit, to m − 1, the most significant bit; the word is m bits wide. Using the terminology of Fredman and Saks [2], we are dealing with the CPROB(m) model. Furthermore, we assume that we can perform in one unit of time arithmetic operations of multiplication, add...
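A well-known constant-time method in this setting (one multiplication plus one table lookup) is the de Bruijn trick; it is shown here as an illustration of the problem, not as the paper's specific algorithm. The constant 0x077CB531 is a standard 32-bit de Bruijn sequence.

```python
# Constant-time least-significant-set-bit via a de Bruijn sequence (32-bit words).
DEBRUIJN32 = 0x077CB531
TABLE = [0] * 32
for i in range(32):
    # Each 5-bit window of the de Bruijn sequence is unique, so this is a bijection.
    TABLE[((DEBRUIJN32 << i) & 0xFFFFFFFF) >> 27] = i

def lsb_index(x):
    """Index (from 0) of the least significant set bit of a nonzero 32-bit x."""
    isolated = x & -x                                  # keeps only the lowest set bit: 2^i
    return TABLE[((isolated * DEBRUIJN32) & 0xFFFFFFFF) >> 27]
```

Multiplying 2^i by the de Bruijn constant shifts it left by i, so the top five bits of the product identify i via the precomputed table.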
Transdichotomous Algorithms Without Multiplication - Some Upper and Lower Bounds
, 1997
Abstract

Cited by 13 (1 self)
We show that on a RAM with addition, subtraction, bitwise Boolean operations and shifts, but no multiplication, there is a transdichotomous solution to the static dictionary problem using linear space and with query time √(log n) · (log log n)^(1+o(1)). On the way, we show that two w-bit words can be multiplied in time (log w)^(1+o(1)) and that time Ω(log w) is necessary, and that Θ(log log w) time is necessary and sufficient for identifying the least significant set bit of a word. 1 Introduction Consider a problem (like sorting or searching) whose instances consist of collections of members of the universe U = {0, 1}^w of w-bit bit strings (or numbers between 0 and 2^w − 1). An increasingly popular theoretical model for studying such problems is the transdichotomous model of computation [13, 14, 1, 7, 8, 3, 2, 20, 18, 9, 4, 21, 6], where one assumes a random access machine where each register is capable of holding exactly one element of the universe, i.e. we...
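Without multiplication, a simple binary search over the word finds the least significant set bit using only subtraction, AND, shifts, and comparisons. This naive sketch takes O(log w) steps, not the optimal Θ(log log w) of the paper; it only illustrates that the allowed instruction set suffices.

```python
def lsb_no_mul(x, w=64):
    """Least significant set bit of nonzero x using only subtraction, AND, shifts.

    Binary search over the word: O(log w) steps. (The paper's bound for this
    problem without multiplication is Theta(log log w), via a cleverer method.)
    """
    x &= -x                      # isolate the lowest set bit (x & -x = 2^i)
    idx, half = 0, w >> 1
    while half:
        if x >> half:            # does the isolated bit live in the upper half?
            x >>= half
            idx += half
        half >>= 1
    return idx
```

Each iteration halves the search window, so 64-bit words need six shift-and-test rounds.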
Lower bounds for static dictionaries on RAMs with bit operations but no multiplication
In Automata, Languages and Programming (Paderborn), 1996
Abstract

Cited by 7 (2 self)
We consider solving the static dictionary problem with n keys from the universe {0, …, m − 1} on a RAM with direct and indirect addressing, conditional jump, addition, bitwise Boolean operations, and arbitrary shifts (a Practical RAM). For any ε > 0, tries yield constant query time using space m^ε, provided that n = m^o(1). We show that this is essentially optimal: any scheme with constant query time requires space m^ε for some ε > 0, even if n ≤ (log m)^2. 1 Introduction The static dictionary problem is the following: given a subset S of size n of the universe U = {0, …, m − 1}, store it as a data structure φ_S in the memory of a unit cost random access machine, using few memory registers, each containing O(log m) bits, so that membership queries "Is x ∈ S?" can be answered efficiently for any value of x. The set S can be stored as a sorted table using n memory registers. Then queries can be answered using binary search in O(log n) time. Yao...
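The trie upper bound can be sketched directly: with branching factor around m^ε the trie has constant depth, so membership costs a constant number of table lookups. The sketch below is naive (its space is O(n · m^ε) per node chain, not the tight bound), and all names are illustrative.

```python
import math

def build_trie(keys, m, eps=0.5):
    """Store keys from {0, ..., m-1} in a trie with branching factor ~ m^eps.

    For fixed eps the depth is a constant, so membership is O(1) lookups.
    Naive sketch only: each internal node allocates a full table of size b.
    """
    b = max(2, math.ceil(m ** eps))   # branching factor ~ m^eps
    depth = 1
    while b ** depth < m:             # smallest depth with b^depth >= m
        depth += 1
    root = [None] * b
    for k in keys:
        node = root
        for i in range(depth - 1, 0, -1):
            d = (k // b ** i) % b     # i-th base-b digit of k
            if node[d] is None:
                node[d] = [None] * b
            node = node[d]
        node[k % b] = True            # leaf level: mark membership
    return root, b, depth

def member(trie, x):
    root, b, depth = trie
    node = root
    for i in range(depth - 1, 0, -1):
        node = node[(x // b ** i) % b]
        if node is None:
            return False
    return node[x % b] is True
```

Note that the digit extraction uses division; on a Practical RAM it would be done with shifts by choosing b as a power of two.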
Arne Andersson
Abstract
We show that a unit-cost RAM with a word length of w bits can sort n integers in the range 0 … 2^w − 1 in O(n log log n) time, for arbitrary w ≥ log n, a significant improvement over the bound of O(n √log n) achieved by the fusion trees of Fredman and Willard. Provided that w ≥ (log n)^(2+ε) for some fixed ε > 0, the sorting can even be accomplished in linear expected time with a randomized algorithm. Both of our algorithms parallelize without loss on a unit-cost PRAM with a word length of w bits. The first one yields an algorithm that uses O(log n) time and O(n log log n) operations on a deterministic CRCW PRAM. The second one yields an algorithm that uses O(log n) expected time and O(n) expected operations on a randomized EREW PRAM, provided that w ≥ (log n)^(2+ε) for some fixed ε > 0. Our deterministic and randomized sequential and parallel algorithms generalize to the lexicographic sorting problem of sorting multiple-precision integers represented in several words...
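For contrast with these word-RAM bounds, the classical baseline for integer sorting is LSD radix sort, sketched below; it runs in O((w/b)(n + 2^b)) for b-bit digits, which is linear only when w = O(log n). The abstract's algorithms are what improve on this (and on comparison sorting) for larger w. This sketch is a standard technique, not the paper's algorithm.

```python
def radix_sort(a, w, bits=8):
    """LSD radix sort of w-bit non-negative integers using `bits`-bit digits.

    Each pass is a stable bucket distribution, so after processing all
    w/bits digit positions the list is fully sorted.
    """
    mask = (1 << bits) - 1
    for shift in range(0, w, bits):
        buckets = [[] for _ in range(1 << bits)]
        for x in a:
            buckets[(x >> shift) & mask].append(x)
        a = [x for bucket in buckets for x in bucket]  # stable concatenation
    return a
```

Stability of each pass is essential: ties on a higher digit must preserve the order established by the lower digits.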
Equivalence Between Sorting and . . .
, 1995
Abstract
For a RAM with arbitrary word size, it is shown that if we can sort n integers, each contained in one word, in time n · s(n), then (and only then) there is a priority queue with capacity for n integers, supporting find-min in constant time and insert and delete in s(n) + O(1) amortized time. Here it is required that when we insert a key, it is not smaller than the current smallest key. The equivalence holds even if n is limited in terms of the word size w. One application is an O(n (√log n)^(1+ε) + m) algorithm for the single source shortest path problem on a graph with n nodes and m edges.
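The easy direction of this equivalence is immediate: a priority queue sorts, since n inserts followed by n extract-mins emit the keys in order, giving O(n · s(n)) sorting from s(n)-time operations. A minimal sketch using Python's `heapq` as the stand-in priority queue:

```python
import heapq

def sort_via_pq(xs):
    """Sort by n inserts followed by n extract-mins.

    With priority queue operations costing s(n) each, this sorts in
    O(n * s(n)); the paper's contribution is the converse reduction,
    building a priority queue from any sorting algorithm.
    """
    pq = []
    for x in xs:
        heapq.heappush(pq, x)                  # insert
    return [heapq.heappop(pq) for _ in xs]     # repeated extract-min
```

The hard direction (sorting to priority queue) is the paper's result and is not captured by this sketch.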