Results 1–9 of 9
On RAM priority queues
, 1996
Cited by 70 (9 self)
Priority queues are some of the most fundamental data structures. They are used directly for, say, task scheduling in operating systems. Moreover, they are essential to greedy algorithms. We study the complexity of priority queue operations on a RAM with arbitrary word size. We present exponential improvements over previous bounds, and we show tight relations to sorting. Our first result is a RAM priority queue supporting insert and extract-min operations in worst-case time O(log log n), where n is the current number of keys in the queue. This is an exponential improvement over the O(√log n) bound of Fredman and Willard from STOC'90. Our algorithm is simple, and it only uses AC⁰ operations, meaning that there is no hidden time dependency on the word size. Plugging this priority queue into Dijkstra's algorithm gives an O(m log log m) algorithm for the single-source shortest path problem on a graph with m edges, as compared with the previous O(m√log m) bound based on Fredman...
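The Dijkstra application mentioned in the abstract can be sketched with an ordinary binary heap. This is an illustration only: it uses Python's `heapq` (giving the classical O((n + m) log n) behavior), not the O(log log n) RAM priority queue the paper constructs.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths; adj maps node -> list of (neighbor, weight).

    Illustrative sketch: a binary heap stands in for the paper's
    O(log log n) structure, so each extract-min here costs O(log n).
    """
    dist = {src: 0}
    pq = [(0, src)]                   # (tentative distance, node)
    while pq:
        d, u = heapq.heappop(pq)      # extract-min
        if d > dist.get(u, float("inf")):
            continue                  # stale entry, already improved
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))  # insert (lazy decrease-key)
    return dist
```

Swapping in a faster priority queue changes only the cost of the heap operations, which is exactly how the abstract obtains its O(m log log m) shortest-path bound.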
Fast Priority Queues for Cached Memory
 ACM Journal of Experimental Algorithmics
, 1999
Cited by 46 (7 self)
This paper advocates the adaptation of external-memory algorithms to this purpose. This idea and the practical issues involved are exemplified by engineering a fast priority queue, suited to both external memory and cached memory, that is based on k-way merging. It improves on previous external-memory algorithms by constant factors crucial for transferring the technique to cached memory. Running in the cache hierarchy of a workstation, the algorithm is at least two times faster than optimized implementations of binary heaps and 4-ary heaps for large inputs.
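The k-way merging primitive the abstract builds on can be sketched as follows. This is an illustrative in-memory version, not the paper's engineered structure, which additionally batches elements into cache-sized blocks to amortize memory transfers.

```python
import heapq

def k_way_merge(runs):
    """Merge k sorted runs into one sorted list using a k-entry heap.

    Each heap entry is (value, run index, position in run); popping the
    minimum and refilling from the same run yields the merged order.
    """
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(runs[i]):
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
    return out
```

With k runs of total length n this performs n heap operations on a heap of size at most k, i.e. O(n log k) comparisons; the cache-efficiency argument in the paper comes from how the runs are laid out and batched, not from the merge logic itself.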
Worst-Case Efficient External-Memory Priority Queues
 In Proc. Scandinavian Workshop on Algorithm Theory, LNCS 1432
, 1998
Cited by 37 (3 self)
A priority queue Q is a data structure that maintains a collection of elements, each element having an associated priority drawn from a totally ordered universe, under the operations Insert, which inserts an element into Q, and DeleteMin, which deletes an element with the minimum priority from Q. In this paper a priority-queue implementation is given which is efficient with respect to the number of block transfers, or I/Os, performed between the internal and external memories of a computer. Let B and M denote the respective capacities of a block and the internal memory, measured in elements. The developed data structure handles any intermixed sequence of Insert and DeleteMin operations such that in every disjoint interval of B consecutive priority-queue operations at most c log_{M/B}(N/M) I/Os are performed, for some positive constant c. These I/Os are divided evenly among the operations: if B ≥ c log_{M/B}(N/M), one I/O is necessary for every B/(c log_{M/B}(N/M))th operation ...
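For concreteness, the amortized bound can be evaluated numerically. The sketch below assumes c = 1 purely for illustration; the abstract only guarantees some positive constant c.

```python
import math

def ios_per_B_operations(N, M, B, c=1.0):
    """Upper bound on I/Os incurred by any B consecutive priority-queue
    operations: c * log_{M/B}(N/M).  The constant c is unspecified in
    the abstract; c = 1 here is an assumption for illustration only."""
    return c * math.log(N / M, M / B)

# Example: N = 2**30 elements, M = 2**20 internal memory, B = 2**10 block
# capacity gives log base 1024 of 1024 = 1 I/O per block of B operations,
# i.e. roughly one I/O per 1024 queue operations.
```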
Funnel heap - a cache oblivious priority queue
 In Proc. 13th Annual International Symposium on Algorithms and Computation, volume 2518 of LNCS
, 2002
Cited by 34 (8 self)
The cache oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache oblivious model is automatically efficient in a multi-level memory model. Arge et al. recently presented the first optimal cache oblivious priority queue, and demonstrated the importance of this result by providing the first cache oblivious algorithms for graph problems. Their structure uses cache oblivious sorting and selection as subroutines. In this paper, we devise an alternative optimal cache oblivious priority queue based only on binary merging. We also show that our structure can be made adaptive to different usage profiles.
Fast Meldable Priority Queues
, 1995
Cited by 11 (2 self)
We present priority queues that support the operations MakeQueue, FindMin, Insert and Meld in worst-case time O(1) and Delete and DeleteMin in worst-case time O(log n). They can be implemented on the pointer machine and require linear space. The time bounds are optimal for all implementations where Meld takes worst-case time o(n).
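The Meld operation can be illustrated with a much simpler structure than the paper's: a skew heap, whose Meld is O(log n) amortized rather than the worst-case O(1) the paper achieves. The sketch below is for intuition about melding only and does not reproduce the paper's construction.

```python
class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def meld(a, b):
    """Skew-heap meld: recursively merge along the right spines, swapping
    children at each step to keep the amortized cost logarithmic."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a                       # a holds the smaller root
    # meld b into a's right subtree, then swap children (the skew step)
    a.left, a.right = meld(a.right, b), a.left
    return a

def insert(h, key):
    return meld(h, Node(key))             # Insert is a one-node Meld

def find_min(h):
    return h.key                          # minimum sits at the root

def delete_min(h):
    return meld(h.left, h.right)          # remove root, meld its subtrees
```

Note how every operation reduces to Meld; this is the structural idea that makes meldable heaps attractive, even though achieving worst-case O(1) Meld, as the paper does, requires a considerably more involved design.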
Lectures on Network Complexity
, 1996
Cited by 3 (0 self)
Section 1 presents counting arguments that establish upper and lower bounds on the maximum circuit complexity of any n-argument Boolean function over the full basis of 2-input gates. These and closely related results appear in [4, 12, 23, 25]. The particularly slick proof of Theorem 1.1 is due to Schnorr [20]. • Section 2 uses Turing time complexity T(n) to bound circuit complexity for families of Boolean functions. Savage [18] showed that the circuit complexity is at most O(T(n)²). Here I present a result with Pippenger that reduces this bound to O(T(n)) for oblivious Turing machines and to O(T(n) log T(n)) for unrestricted Turing machines. This research was supported in part by the National Science Foundation under research grant GJ43634x to M.I.T.
Space Efficient Fair Queuing by Stochastic Memory Multiplexing
, 1998
Cited by 1 (0 self)
We propose a new scheme for multiplexing buffer space between flows contending for the same output port. In our scheme, messages of the different flows are serviced in either Round-Robin or almost Round-Robin order, thus supporting fair queuing. Our scheme is both simple for hardware implementation and achieves high buffer-space utilization (by taking advantage of statistical multiplexing between the flows). Therefore our scheme enjoys both the simplicity of the hardware queue-per-flow scheme and the space efficiency of the linked-list based dynamic scheme, suggesting an attractive compromise between the two extremes. Keywords: Fair Queuing, Round-Robin, Deficit Round-Robin, Weighted Fair Queuing, Switch Architecture, Stochastic Multiplexing. 1 Introduction. We address the problem of sharing buffer space between backlogged flows that pass through a link. Since the traffic of the flows may be bursty, the link is occasionally congested, and backlogged messages are queued up. The sim...
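The Deficit Round-Robin discipline listed among the keywords can be sketched as below. This shows only the textbook service order; the paper's actual contribution, the stochastic multiplexing of buffer space among the per-flow queues, is not modeled here.

```python
from collections import deque

def drr_schedule(flows, quantum, rounds):
    """Deficit round-robin over per-flow FIFO queues of packet sizes.

    Each round, every backlogged flow gains `quantum` units of credit
    and sends packets while its credit covers the head-of-line packet.
    Returns the service order as (flow index, packet size) pairs.
    """
    queues = [deque(f) for f in flows]
    deficit = [0] * len(flows)
    served = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficit[i] = 0        # idle flows accumulate no credit
                continue
            deficit[i] += quantum
            while q and q[0] <= deficit[i]:
                pkt = q.popleft()
                deficit[i] -= pkt
                served.append((i, pkt))
    return served
```

The carried-over deficit is what makes the discipline fair for variable-size packets: a flow whose packet did not fit this round keeps its credit and catches up later.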
RankSensitive Priority Queues
We introduce the rank-sensitive priority queue — a data structure that always knows the minimum element it contains, for which insertion and deletion take O(log(n/r)) time, with n being the number of elements in the structure, and r being the rank of the element being inserted or deleted (r = 1 for the minimum, r = n for the maximum). We show how several elegant implementations of rank-sensitive priority queues can be obtained by applying novel modifications to treaps and amortized balanced binary search trees, and we show that in the comparison model, the bounds above are essentially the best possible. Finally, we conclude with a case study on the use of rank-sensitive priority queues for shortest path computation.
Lectures on Network Complexity
, 1977
These notes, often referred to as the “Frankfurt Lecture Notes”, are perhaps my most widely circulated unpublished work. Resulting from a series of lectures I gave at the University of Frankfurt in June of 1974, they summarize some early work on what is now known as circuit complexity. They circulated originally in the form of xeroxes of my handwritten notes. Later, in April of 1977, I revised the notes and had them typed up; copies of the typewritten version have also circulated widely. Now, with the availability of the worldwide web, I have decided to reissue them once again, this time in electronic form. In going over the notes again, I have tried to preserve the original style and to resist the temptation to make “improvements” to either the content or its presentation. Nevertheless, I have fixed a few technical errors and have added a few missing assumptions here and there. I have also added a bibliography that was not present in the original. While I make no claims to its completeness, I have tried to add citations for the results referenced in the original notes, as well as giving references to a few related subsequent works. Even today, more than 20 years after the original lectures, readers may still find some material here of interest: • Section 1 presents counting arguments that establish upper and lower bounds on the maximum circuit complexity of any n-argument Boolean function over the full basis of 2-input gates. These and closely related results appear in [4, 12, 23, 25]. The particularly slick proof of Theorem 1.1 is due to Schnorr [20]. • Section 2 uses Turing time complexity T(n) to bound circuit complexity for families of Boolean functions. Savage [18] showed that the circuit complexity is at most O(T(n)²). Here I present a result with Pippenger that reduces this bound to O(T(n)) for oblivious Turing machines and to O(T(n) log T(n)) for unrestricted Turing machines.