Results 11–20 of 21
Space-Efficient Data-Analysis Queries on Grids
Cited by 5 (4 self)
Abstract. We consider various data-analysis queries on two-dimensional points. We give new space/time tradeoffs over previous work on semigroup and group queries such as sum, average, variance, minimum and maximum. We also introduce new solutions to queries rarely considered in the literature, such as two-dimensional quantiles, majorities, successor/predecessor and mode queries. We address both static and dynamic scenarios.
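The range-sum queries the abstract mentions can be put in context with a textbook baseline: a two-dimensional prefix-sum table answers any orthogonal range-sum query in O(1) time, but it spends a word per grid cell, which is precisely the kind of space the paper's tradeoffs improve on. A hypothetical sketch (function names are mine, not the paper's):

```python
# Textbook 2D prefix-sum table: P[i][j] holds the sum of grid[0..i-1][0..j-1],
# built by inclusion-exclusion, so any axis-aligned rectangle sum is four
# table lookups. Space is one word per cell -- the baseline the paper beats.

def build_prefix(grid):
    rows, cols = len(grid), len(grid[0])
    P = [[0] * (cols + 1) for _ in range(rows + 1)]
    for i in range(rows):
        for j in range(cols):
            P[i + 1][j + 1] = (grid[i][j] + P[i][j + 1]
                               + P[i + 1][j] - P[i][j])
    return P

def range_sum(P, r1, c1, r2, c2):
    """Sum of grid[r1..r2][c1..c2], both ends inclusive, in O(1) time."""
    return P[r2 + 1][c2 + 1] - P[r1][c2 + 1] - P[r2 + 1][c1] + P[r1][c1]
```

The same four-lookup scheme works for any group operator (sum, count), but not for semigroup operators like min/max, which is one reason the paper treats the two query families separately.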
Relaxed weak queues: an alternative to run-relaxed heaps
, 2005
Cited by 4 (3 self)
Abstract. A simplification of a run-relaxed heap, called a relaxed weak queue, is presented. This new priority-queue implementation supports all operations as efficiently as the original: find-min, insert, and decrease (also called decrease-key) in O(1) worst-case time, and delete in O(lg n) worst-case time, n denoting the number of elements stored prior to the operation. These time bounds are valid on a pointer machine as well as on a random-access machine. A relaxed weak queue is a collection of at most ⌊lg n⌋ + 1 perfect weak heaps, where there are in total at most ⌊lg n⌋ + 1 nodes that may violate weak-heap order. In a pointer-based representation of a perfect weak heap, which is a binary tree, it is enough to use two pointers per node to record parent-child relationships. Due to decrease, each node must store one additional pointer. The auxiliary data structures maintained to keep track of perfect weak heaps and potential violation nodes only require O(lg n) words of storage. That is, excluding the space used by the elements themselves, the total space usage of a relaxed weak queue can be as low as 3n + O(lg n) words. ACM CCS Categories and Subject Descriptors. E.1 [Data Structures]: Lists, stacks, and queues; E.2 [Data Storage Representations]: Linked representations
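The abstract's space accounting (3n + O(lg n) words beyond the elements) follows from three pointer words per node. A hypothetical node layout consistent with that count (the field names are mine; the paper's actual encoding of parent-child links may differ):

```python
# Three pointer fields per node: two encode the parent-child relationships
# of a perfect weak heap (a binary tree), and decrease forces one extra.
# Auxiliary bookkeeping adds only O(lg n) words on top of these 3n.

class WeakHeapNode:
    __slots__ = ("link1", "link2", "extra")  # 3 pointer words per node

    def __init__(self):
        self.link1 = self.link2 = self.extra = None

def pointer_words(n):
    """Pointer overhead for n nodes, ignoring the O(lg n) auxiliary part."""
    return 3 * n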
A Note on Worst Case Efficient Meldable Priority Queues
, 1996
Cited by 3 (0 self)
We give a simple implementation of meldable priority queues, achieving Insert, Find min, and Meld in O(1) worst-case time, and Delete min and Delete in O(log n) worst-case time.
1 Introduction
The implementation of priority queues is a classic problem in computer science. The fundamental operations are Insert and Delete min, but various extra operations, such as Find min, Meld, Delete, and Decrease key, have been considered. Fibonacci heaps [6] support all of these, in O(log n) time for the Delete min and Delete operations, and O(1) for the rest. These bounds are, however, only amortized. Some earlier proposals [4, 5, 8] achieve such bounds in the worst-case sense for various subsets of the operations supported by Fibonacci heaps. None of these subsets includes the meld operation. This has been remedied recently by Brodal, who has given worst-case solutions, first [2] for the set Insert, Delete min, Find min, Meld, and Delete, and later [3] for the full set of Fibonacci heap operations ...
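For contrast with the O(1) worst-case Meld claimed above, the textbook self-adjusting skew heap below shows what a Meld operation must accomplish, with only O(lg n) amortized cost. This is a hypothetical illustration, not the paper's structure:

```python
# Classic skew heap: meld merges two right spines and swaps children at
# every step, which is what keeps the amortized cost logarithmic.

class SkewNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def meld(a, b):
    """Merge two skew heaps; returns the new root (minimum on top)."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a                     # keep the smaller root on top
    a.right = meld(a.right, b)          # merge along the right spine
    a.left, a.right = a.right, a.left   # self-adjusting child swap
    return a

def insert(root, key):
    return meld(root, SkewNode(key))

def delete_min(root):
    """Return (minimum key, new root)."""
    return root.key, meld(root.left, root.right)
```

Insert and Delete min then fall out of meld for free, which is the hallmark of meldable designs; the hard part, and the paper's point, is making meld constant-time in the worst case.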
The complexity of implicit and space-efficient priority queues
 Proceedings of the 9th Workshop on Algorithms and Data Structures, Lecture Notes in Computer Science
Cited by 3 (0 self)
Abstract. In this paper we study the time-space complexity of implicit priority queues supporting the decrease-key operation. Our first result is that by using one extra word of storage it is possible to match the performance of Fibonacci heaps: constant amortized time for insert and decrease-key and logarithmic time for delete-min. Our second result is a lower bound showing that one extra word really is necessary. We reduce the decrease-key operation to a cell-probe type game called the Usher's Problem, where one must maintain a simple data structure without the aid of any auxiliary storage.
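"Implicit" here means the queue lives entirely in an array, with parent and child found by index arithmetic rather than pointers. The baseline implicit structure is the classic binary heap sketched below; the abstract's subtlety is that supporting decrease-key with Fibonacci-heap bounds requires exactly one word beyond this zero-overhead layout:

```python
# Implicit binary heap: node i has parent (i-1)//2 and children 2i+1, 2i+2,
# so no pointer words are stored at all -- the array order is the structure.

def sift_up(a, i):
    while i > 0 and a[(i - 1) // 2] > a[i]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2

def sift_down(a, i):
    n = len(a)
    while True:
        small, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and a[l] < a[small]:
            small = l
        if r < n and a[r] < a[small]:
            small = r
        if small == i:
            return
        a[i], a[small] = a[small], a[i]
        i = small

def insert(a, key):          # O(log n)
    a.append(key)
    sift_up(a, len(a) - 1)

def delete_min(a):           # O(log n); minimum is always a[0]
    a[0], a[-1] = a[-1], a[0]
    m = a.pop()
    sift_down(a, 0)
    return m
```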
Putting your data structure on a diet
 In preparation (2006). [Ask Jyrki for details]
, 2007
Cited by 2 (2 self)
Abstract. Consider a data structure D that stores a dynamic collection of elements. Assume that D uses a linear number of words in addition to the elements stored. In this paper several data-structural transformations are described that can be used to transform D into another data structure D′ that supports the same operations as D, has considerably smaller memory overhead than D, and performs the supported operations slower than D by a small constant factor or a small additive term, depending on the data structure and operation in question. The compaction technique has been successfully applied to linked lists, dictionaries, and priority queues.
A Generalization of Binomial Queues
 Information Processing Letters
, 1996
Cited by 2 (0 self)
We give a generalization of binomial queues involving an arbitrary sequence (m_k)_{k=0,1,2,...} of integers greater than one. Different sequences lead to different worst-case bounds for the priority queue operations, allowing the user to adapt the data structure to the needs of a specific application. Examples include the first priority queue to combine a sublogarithmic worst-case bound for Meld with a sublinear worst-case bound for Delete min.
Keywords: Data structures; Meldable priority queues.
1 Introduction
The binomial queue, introduced in 1978 by Vuillemin [14], is a data structure for meldable priority queues. In meldable priority queues, the basic operations are insertion of a new item into a queue, deletion of the item having minimum key in a queue, and melding of two queues into a single queue. The binomial queue is one of many data structures which support these operations at a worst-case cost of O(log n) for a queue of n items. Theoretical [2] and empirical [9] evidence i...
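The role of the sequence (m_k) can be seen through the standard number-system analogy: inserting into a binomial queue behaves like incrementing a binary counter of tree ranks, and the generalization swaps base 2 for a mixed-radix counter. A hypothetical sketch of that counter (my own framing of the analogy, not code from the paper):

```python
# Mixed-radix counter: digit k counts trees of rank k and may reach
# radices[k] - 1 before a carry; a carry models linking m_k equal-rank
# trees into one tree of rank k + 1. radices[k] plays the role of m_k.

def increment(digits, radices):
    """Add one to the counter; maintains digits[k] < radices[k]."""
    k = 0
    while True:
        if k == len(digits):
            digits.append(0)
        digits[k] += 1
        if digits[k] < radices[k]:
            return digits
        digits[k] = 0  # carry to rank k + 1
        k += 1
```

With all radices equal to 2 this is exactly the binary counter of a classic binomial queue; larger m_k trade cheaper carries (fewer links per insert) against larger digit sets, which is how the different worst-case bounds arise.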
Priority Queues and Sorting for Read-Only Data
Cited by 1 (0 self)
Abstract. We revisit the random-access-machine model in which the input is given on a read-only random-access medium, the output is to be produced to a write-only sequential-access medium, and in addition there is a limited random-access workspace. The length of the input is N elements, the length of the output is limited by the computation itself, and the capacity of the workspace is O(S + w) bits, where S is a parameter specified by the user and w is the number of bits per machine word. We present a state-of-the-art priority queue, called an adjustable navigation pile, for this model. Under some reasonable assumptions, our priority queue supports minimum and insert in O(1) worst-case time and extract in O(N/S + lg S) worst-case time, where lg N ≤ S ≤ N / lg N. We also show how to use this data structure to simplify the existing optimal O(N^2/S + N lg S)-time sorting algorithm for this model.
The Randomized Complexity of Maintaining the Minimum
, 1996
The complexity of maintaining a set under the operations Insert, Delete and FindMin is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost t comparisons per Insert and Delete has expected cost at least n/(e2^{2t}) − 1 comparisons for FindMin. If FindMin is replaced by a weaker operation, FindAny, then it is shown that a randomized algorithm with constant expected cost per operation exists, but no deterministic algorithm. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given.
1 Introduction
We consider the complexity of maintaining a set S of elements from a totally ordered universe under the following operations: Insert(e): inserts the element e into S, Delete(e): removes from S the element e provided it is known where e is stored, and FindMin: returns the minimum element in S without removing it. We refer to this problem as the Insert-Delete-FindMin ...
Sorting Multisets Stably in Minimum Space
, 1994
We consider the problem of sorting a multiset of size n containing m distinct elements, where the ith distinct element appears n_i times. Under the assumption that our model of computation allows only the operations of comparing elements and moving elements in the memory, Ω(n log n − Σ_{i=1}^{m} n_i log n_i + n) is known to be a lower bound for the computational complexity of the sorting problem. In this paper we present a minimum-space algorithm that sorts a multiset stably in asymptotically optimal worst-case time. A Quicksort-type approach is used, where at each recursive step the median is chosen as the partitioning element. To obtain a stable minimum-space implementation, we develop linear-time in-place algorithms for the following problems, which are of interest in their own right: Stable unpartitioning: Assume that an n-element array A is stably partitioned into two subarrays A0 and A1. The problem is to recover A from its constituents A0 and A1. The information available is the partitioning element used and a bit array of size n indicating whether an element of A0 or A1 was originally in the corresponding position of A.
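The stable-unpartitioning subproblem is easy once extra space is allowed; the paper's contribution is doing it in place in linear time. A hypothetical out-of-place sketch, just to pin down the specification:

```python
# Recover A from its stable partition (a0, a1) and the bit array:
# bits[i] == 0 means position i of A originally held the next unread
# element of a0, bits[i] == 1 the next unread element of a1. Stability
# means each subarray preserves the original relative order, so a single
# left-to-right merge reconstructs A exactly.

def stable_unpartition(a0, a1, bits):
    it0, it1 = iter(a0), iter(a1)
    return [next(it1) if b else next(it0) for b in bits]
```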
Strict Fibonacci Heaps
We present the first pointer-based heap implementation with time bounds matching those of Fibonacci heaps in the worst case. We support make-heap, insert, find-min, meld and decrease-key in worst-case O(1) time, and delete and delete-min in worst-case O(lg n) time, where n is the size of the heap. The data structure uses linear space. A previous, very complicated, solution achieving the same time bounds in the RAM model made essential use of arrays and extensive use of redundant counter schemes to maintain balance. Our solution uses neither. Our key simplification is to discard the structure of the smaller heap when doing a meld. We use the pigeonhole principle in place of the redundant counter mechanism.